Test Report: Hyperkit_macOS 20045

70ee1ceb4b2f7849aa4717a6092bbfa282d9029b:2024-12-04:37344

Tests failed (13/324)

TestOffline (195.23s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-182000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-182000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.77037959s)
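The failing invocation can be reproduced outside the test harness by running the same command the test wraps. A minimal Go sketch of that invocation (binary path, profile name, and flags are copied from the log line above; the relative binary path belongs to the CI workspace and will differ locally):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same binary and flags as the failing run above; adjust the path
	// to wherever the minikube build output lives locally.
	cmd := exec.Command("out/minikube-darwin-amd64",
		"start", "-p", "offline-docker-182000",
		"--alsologtostderr", "-v=1",
		"--memory=2048", "--wait=true", "--driver=hyperkit")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	err := cmd.Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// This CI run saw exit status 80 after roughly 3m10s.
		fmt.Fprintf(os.Stderr, "minikube start failed: exit status %d\n", exitErr.ExitCode())
		os.Exit(exitErr.ExitCode())
	} else if err != nil {
		fmt.Fprintln(os.Stderr, "could not run minikube:", err)
		os.Exit(1)
	}
}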

-- stdout --
	* [offline-docker-182000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-182000" primary control-plane node in "offline-docker-182000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-182000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
-- /stdout --
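In the stderr log that follows, the hyperkit driver launches the VM and then polls /var/db/dhcpd_leases roughly every two seconds ("Attempt 0", "Attempt 1", ...) for the generated MAC 6e:59:53:f0:4c:78; the 18 existing leases never include it, which is what stalls the start. A rough sketch of that polling loop (simplified to a substring match; per the "dhcp entry" lines below, the real driver parses each lease entry into name/IP/MAC fields):

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// waitForDHCPLease polls the macOS dhcpd lease file until an entry containing
// the VM's MAC address appears, or the attempt budget runs out. This is a
// simplified stand-in for the driver's "Searching for <mac> in
// /var/db/dhcpd_leases ... Attempt N" loop seen in the log below.
func waitForDHCPLease(mac string, attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err == nil && strings.Contains(string(data), mac) {
			return nil // lease found; the driver would now extract the IP
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
}

func main() {
	// MAC generated for this run (from the log); the interval matches the
	// ~2s spacing between attempts seen below.
	if err := waitForDHCPLease("6e:59:53:f0:4c:78", 8, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}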
** stderr ** 
	I1204 16:04:18.325281   22192 out.go:345] Setting OutFile to fd 1 ...
	I1204 16:04:18.325532   22192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 16:04:18.325537   22192 out.go:358] Setting ErrFile to fd 2...
	I1204 16:04:18.325541   22192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 16:04:18.325718   22192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 16:04:18.327849   22192 out.go:352] Setting JSON to false
	I1204 16:04:18.361268   22192 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7428,"bootTime":1733349630,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 16:04:18.361441   22192 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 16:04:18.415774   22192 out.go:177] * [offline-docker-182000] minikube v1.34.0 on Darwin 15.0.1
	I1204 16:04:18.463936   22192 notify.go:220] Checking for updates...
	I1204 16:04:18.489641   22192 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 16:04:18.528840   22192 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 16:04:18.549761   22192 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 16:04:18.575038   22192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 16:04:18.596711   22192 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:04:18.617788   22192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 16:04:18.638888   22192 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 16:04:18.670722   22192 out.go:177] * Using the hyperkit driver based on user configuration
	I1204 16:04:18.715562   22192 start.go:297] selected driver: hyperkit
	I1204 16:04:18.715580   22192 start.go:901] validating driver "hyperkit" against <nil>
	I1204 16:04:18.715598   22192 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 16:04:18.720871   22192 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 16:04:18.721021   22192 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 16:04:18.732346   22192 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 16:04:18.738851   22192 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:04:18.738876   22192 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 16:04:18.738908   22192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 16:04:18.739143   22192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 16:04:18.739179   22192 cni.go:84] Creating CNI manager for ""
	I1204 16:04:18.739215   22192 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 16:04:18.739220   22192 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 16:04:18.739293   22192 start.go:340] cluster config:
	{Name:offline-docker-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 16:04:18.739381   22192 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 16:04:18.760847   22192 out.go:177] * Starting "offline-docker-182000" primary control-plane node in "offline-docker-182000" cluster
	I1204 16:04:18.802513   22192 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 16:04:18.802545   22192 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1204 16:04:18.802562   22192 cache.go:56] Caching tarball of preloaded images
	I1204 16:04:18.802681   22192 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 16:04:18.802689   22192 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 16:04:18.802948   22192 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/offline-docker-182000/config.json ...
	I1204 16:04:18.802970   22192 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/offline-docker-182000/config.json: {Name:mk2b3146498355c68304348fcb82cf1b0b514a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 16:04:18.803347   22192 start.go:360] acquireMachinesLock for offline-docker-182000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 16:04:18.803422   22192 start.go:364] duration metric: took 62.968µs to acquireMachinesLock for "offline-docker-182000"
	I1204 16:04:18.803442   22192 start.go:93] Provisioning new machine with config: &{Name:offline-docker-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 16:04:18.803483   22192 start.go:125] createHost starting for "" (driver="hyperkit")
	I1204 16:04:18.824989   22192 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 16:04:18.825207   22192 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:04:18.825272   22192 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:04:18.837738   22192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60522
	I1204 16:04:18.838062   22192 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:04:18.838471   22192 main.go:141] libmachine: Using API Version  1
	I1204 16:04:18.838481   22192 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:04:18.838734   22192 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:04:18.838850   22192 main.go:141] libmachine: (offline-docker-182000) Calling .GetMachineName
	I1204 16:04:18.838964   22192 main.go:141] libmachine: (offline-docker-182000) Calling .DriverName
	I1204 16:04:18.839099   22192 start.go:159] libmachine.API.Create for "offline-docker-182000" (driver="hyperkit")
	I1204 16:04:18.839125   22192 client.go:168] LocalClient.Create starting
	I1204 16:04:18.839162   22192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem
	I1204 16:04:18.839222   22192 main.go:141] libmachine: Decoding PEM data...
	I1204 16:04:18.839240   22192 main.go:141] libmachine: Parsing certificate...
	I1204 16:04:18.839325   22192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem
	I1204 16:04:18.839372   22192 main.go:141] libmachine: Decoding PEM data...
	I1204 16:04:18.839385   22192 main.go:141] libmachine: Parsing certificate...
	I1204 16:04:18.839398   22192 main.go:141] libmachine: Running pre-create checks...
	I1204 16:04:18.839407   22192 main.go:141] libmachine: (offline-docker-182000) Calling .PreCreateCheck
	I1204 16:04:18.839492   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:18.839653   22192 main.go:141] libmachine: (offline-docker-182000) Calling .GetConfigRaw
	I1204 16:04:18.848541   22192 main.go:141] libmachine: Creating machine...
	I1204 16:04:18.848558   22192 main.go:141] libmachine: (offline-docker-182000) Calling .Create
	I1204 16:04:18.848748   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:18.848986   22192 main.go:141] libmachine: (offline-docker-182000) DBG | I1204 16:04:18.848705   22212 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:04:18.849063   22192 main.go:141] libmachine: (offline-docker-182000) Downloading /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-17258/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 16:04:19.321301   22192 main.go:141] libmachine: (offline-docker-182000) DBG | I1204 16:04:19.321178   22212 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/id_rsa...
	I1204 16:04:19.442498   22192 main.go:141] libmachine: (offline-docker-182000) DBG | I1204 16:04:19.442416   22212 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/offline-docker-182000.rawdisk...
	I1204 16:04:19.442512   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Writing magic tar header
	I1204 16:04:19.442521   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Writing SSH key tar header
	I1204 16:04:19.443222   22192 main.go:141] libmachine: (offline-docker-182000) DBG | I1204 16:04:19.443173   22212 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000 ...
	I1204 16:04:19.827667   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:19.827688   22192 main.go:141] libmachine: (offline-docker-182000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/hyperkit.pid
	I1204 16:04:19.827775   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Using UUID a0dd8e19-3fd4-45ab-97ca-c5f1782110d0
	I1204 16:04:19.959685   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Generated MAC 6e:59:53:f0:4c:78
	I1204 16:04:19.959702   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-182000
	I1204 16:04:19.959737   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a0dd8e19-3fd4-45ab-97ca-c5f1782110d0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:04:19.959769   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a0dd8e19-3fd4-45ab-97ca-c5f1782110d0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:04:19.959820   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a0dd8e19-3fd4-45ab-97ca-c5f1782110d0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/offline-docker-182000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-182000"}
	I1204 16:04:19.959859   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a0dd8e19-3fd4-45ab-97ca-c5f1782110d0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/offline-docker-182000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-182000"
	I1204 16:04:19.959887   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 16:04:19.963292   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:19 DEBUG: hyperkit: Pid is 22233
	I1204 16:04:19.963884   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 0
	I1204 16:04:19.963920   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:19.964024   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:19.965699   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:19.965920   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:19.965939   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:19.965985   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:19.966012   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:19.966025   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:19.966036   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:19.966048   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:19.966064   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:19.966086   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:19.966112   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:19.966133   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:19.966147   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:19.966156   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:19.966161   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:19.966167   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:19.966172   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:19.966178   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:19.966183   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:19.966190   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:19.974180   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 16:04:20.033066   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 16:04:20.034158   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:04:20.034174   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:04:20.034191   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:04:20.034199   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:04:20.421813   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 16:04:20.421825   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 16:04:20.536551   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:04:20.536589   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:04:20.536609   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:04:20.536631   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:04:20.537395   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 16:04:20.537406   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 16:04:21.967447   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 1
	I1204 16:04:21.967467   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:21.967581   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:21.968644   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:21.968802   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:21.968813   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:21.968820   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:21.968826   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:21.968861   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:21.968872   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:21.968891   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:21.968921   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:21.968936   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:21.968946   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:21.968961   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:21.968983   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:21.968991   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:21.968997   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:21.969005   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:21.969014   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:21.969022   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:21.969030   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:21.969036   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:23.971075   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 2
	I1204 16:04:23.971091   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:23.971143   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:23.972128   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:23.972243   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:23.972251   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:23.972267   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:23.972275   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:23.972283   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:23.972291   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:23.972306   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:23.972318   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:23.972325   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:23.972331   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:23.972337   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:23.972344   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:23.972362   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:23.972376   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:23.972383   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:23.972391   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:23.972398   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:23.972403   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:23.972408   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:25.897040   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 16:04:25.897130   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 16:04:25.897140   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 16:04:25.921857   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:04:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 16:04:25.972493   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 3
	I1204 16:04:25.972512   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:25.972611   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:25.973575   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:25.973668   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:25.973676   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:25.973684   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:25.973691   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:25.973697   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:25.973705   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:25.973712   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:25.973720   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:25.973727   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:25.973733   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:25.973758   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:25.973772   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:25.973779   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:25.973787   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:25.973796   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:25.973804   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:25.973811   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:25.973818   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:25.973826   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:27.973910   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 4
	I1204 16:04:27.973924   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:27.973997   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:27.975035   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:27.975161   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:27.975169   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:27.975177   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:27.975195   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:27.975205   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:27.975233   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:27.975242   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:27.975255   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:27.975267   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:27.975276   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:27.975284   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:27.975292   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:27.975299   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:27.975313   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:27.975325   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:27.975333   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:27.975339   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:27.975345   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:27.975353   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:29.975683   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 5
	I1204 16:04:29.975703   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:29.975763   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:29.976812   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:29.976908   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:29.976917   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:29.976925   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:29.976931   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:29.976948   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:29.976960   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:29.976967   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:29.976975   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:29.976984   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:29.977007   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:29.977013   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:29.977019   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:29.977024   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:29.977030   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:29.977041   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:29.977049   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:29.977057   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:29.977065   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:29.977081   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:31.978318   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 6
	I1204 16:04:31.978330   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:31.978404   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:31.979384   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:31.979432   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:31.979443   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:31.979451   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:31.979458   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:31.979464   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:31.979469   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:31.979475   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:31.979482   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:31.979496   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:31.979506   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:31.979521   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:31.979540   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:31.979548   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:31.979554   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:31.979571   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:31.979584   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:31.979591   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:31.979599   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:31.979611   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:33.981654   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 7
	I1204 16:04:33.981666   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:33.981726   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:33.982689   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:33.982779   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:33.982789   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:33.982797   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:33.982802   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:33.982808   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:33.982817   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:33.982829   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:33.982836   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:33.982853   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:33.982859   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:33.982865   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:33.982873   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:33.982891   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:33.982904   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:33.982915   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:33.982923   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:33.982931   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:33.982944   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:33.982953   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
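Each attempt above is the driver re-reading macOS's /var/db/dhcpd_leases and printing every parsed entry while it looks for the VM's MAC, 6e:59:53:f0:4c:78. Note that the ID fields drop leading zeros per octet (16:14:a9:f:3c:1a for hardware address 16:14:a9:0f:3c:1a), so any matcher has to normalize the target MAC the same way before comparing. Below is a minimal self-contained sketch of such a scan, assuming the stock dhcpd_leases block layout (name, ip_address, and hw_address lines between braces); names like findIPByMAC are illustrative, not the driver's actual API:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// trimZeros rewrites "6e:59:53:f0:4c:78" in the per-octet,
// zero-stripped form used inside /var/db/dhcpd_leases
// (e.g. "0f" becomes "f", "00" becomes "0").
func trimZeros(mac string) string {
	octets := strings.Split(strings.ToLower(mac), ":")
	for i, o := range octets {
		t := strings.TrimLeft(o, "0")
		if t == "" {
			t = "0"
		}
		octets[i] = t
	}
	return strings.Join(octets, ":")
}

// findIPByMAC scans the lease file for a block whose hw_address
// matches mac and returns its ip_address. It assumes ip_address
// appears before hw_address within each { ... } block, the order
// macOS writes.
func findIPByMAC(leasePath, mac string) (string, bool) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", false
	}
	defer f.Close()

	want := "1," + trimZeros(mac)
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease block starts
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			if strings.TrimPrefix(line, "hw_address=") == want && ip != "" {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	if ip, ok := findIPByMAC("/var/db/dhcpd_leases", "6e:59:53:f0:4c:78"); ok {
		fmt.Println("VM IP:", ip)
	} else {
		fmt.Println("no lease yet") // the state every attempt above ends in
	}
}
```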
	I1204 16:04:35.983419   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 8
	I1204 16:04:35.983434   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:35.983512   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:35.984482   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:35.984606   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:35.984615   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:35.984622   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:35.984630   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:35.984640   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:35.984648   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:35.984655   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:35.984661   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:35.984668   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:35.984675   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:35.984685   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:35.984695   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:35.984705   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:35.984712   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:35.984718   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:35.984730   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:35.984746   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:35.984760   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:35.984786   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:37.986614   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 9
	I1204 16:04:37.986630   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:37.986690   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:37.987682   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:37.987735   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:37.987748   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:37.987757   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:37.987766   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:37.987772   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:37.987779   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:37.987785   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:37.987790   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:37.987797   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:37.987810   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:37.987817   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:37.987831   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:37.987841   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:37.987849   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:37.987860   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:37.987865   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:37.987871   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:37.987876   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:37.987896   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:39.990012   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 10
	I1204 16:04:39.990031   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:39.990083   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:39.991080   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:39.991181   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:39.991193   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:39.991201   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:39.991210   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:39.991219   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:39.991228   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:39.991239   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:39.991259   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:39.991273   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:39.991290   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:39.991299   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:39.991307   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:39.991313   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:39.991331   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:39.991348   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:39.991360   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:39.991369   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:39.991376   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:39.991385   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:41.993488   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 11
	I1204 16:04:41.993503   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:41.993606   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:41.994605   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:41.994681   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:41.994696   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:41.994715   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:41.994733   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:41.994741   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:41.994749   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:41.994765   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:41.994776   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:41.994795   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:41.994808   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:41.994816   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:41.994824   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:41.994831   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:41.994838   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:41.994854   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:41.994867   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:41.994885   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:41.994894   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:41.994902   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:43.995736   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 12
	I1204 16:04:43.995749   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:43.995793   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:43.996785   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:43.996873   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:43.996881   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:43.996898   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:43.996904   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:43.996910   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:43.996915   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:43.996921   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:43.996927   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:43.996935   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:43.996941   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:43.996948   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:43.996959   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:43.996965   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:43.996977   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:43.996985   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:43.997001   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:43.997013   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:43.997028   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:43.997039   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:45.998527   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 13
	I1204 16:04:45.998547   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:45.998659   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:45.999639   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:45.999753   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:45.999765   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:45.999772   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:45.999790   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:45.999799   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:45.999807   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:45.999816   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:45.999822   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:45.999836   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:45.999847   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:45.999854   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:45.999862   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:45.999874   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:45.999884   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:45.999890   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:45.999908   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:45.999918   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:45.999925   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:45.999932   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:47.999987   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 14
	I1204 16:04:48.000003   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:48.000066   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:48.001035   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:48.001117   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:48.001124   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:48.001132   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:48.001139   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:48.001154   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:48.001173   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:48.001185   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:48.001197   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:48.001246   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:48.001258   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:48.001265   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:48.001271   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:48.001277   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:48.001283   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:48.001291   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:48.001308   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:48.001320   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:48.001327   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:48.001336   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:50.002741   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 15
	I1204 16:04:50.002757   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:50.002782   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:50.003869   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:50.004002   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:50.004012   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:50.004021   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:50.004028   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:50.004034   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:50.004040   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:50.004072   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:50.004085   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:50.004094   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:50.004104   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:50.004111   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:50.004124   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:50.004132   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:50.004139   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:50.004146   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:50.004153   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:50.004159   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:50.004166   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:50.004183   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:52.004238   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 16
	I1204 16:04:52.004253   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:52.004358   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:52.005299   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:52.005393   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:52.005407   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:52.005415   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:52.005431   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:52.005439   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:52.005447   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:52.005453   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:52.005459   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:52.005466   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:52.005471   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:52.005489   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:52.005503   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:52.005521   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:52.005531   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:52.005540   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:52.005556   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:52.005568   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:52.005576   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:52.005590   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:54.007584   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 17
	I1204 16:04:54.007596   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:54.007671   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:54.008628   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:54.008711   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:54.008720   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:54.008729   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:54.008736   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:54.008752   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:54.008766   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:54.008783   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:54.008791   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:54.008798   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:54.008806   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:54.008817   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:54.008827   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:54.008836   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:54.008850   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:54.008865   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:54.008884   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:54.008893   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:54.008899   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:54.008913   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:56.010462   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 18
	I1204 16:04:56.010477   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:56.010545   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:56.011573   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:56.011663   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:56.011673   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:56.011682   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:56.011691   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:56.011700   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:56.011712   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:56.011728   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:56.011739   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:56.011746   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:56.011754   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:56.011761   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:56.011768   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:56.011776   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:56.011784   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:56.011802   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:56.011815   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:56.011822   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:56.011829   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:56.011835   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:04:58.013903   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 19
	I1204 16:04:58.013916   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:04:58.013925   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:04:58.014993   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:04:58.015041   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:04:58.015051   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:04:58.015060   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:04:58.015067   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:04:58.015089   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:04:58.015104   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:04:58.015127   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:04:58.015140   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:04:58.015156   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:04:58.015164   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:04:58.015178   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:04:58.015189   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:04:58.015196   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:04:58.015202   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:04:58.015222   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:04:58.015235   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:04:58.015243   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:04:58.015250   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:04:58.015258   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:00.015422   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 20
	I1204 16:05:00.015438   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:00.015505   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:00.016698   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:00.016786   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:00.016795   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:00.016803   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:00.016811   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:00.016825   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:00.016832   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:00.016848   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:00.016859   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:00.016882   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:00.016892   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:00.016900   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:00.016908   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:00.016920   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:00.016929   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:00.016936   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:00.016942   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:00.016949   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:00.016956   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:00.016964   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:02.017696   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 21
	I1204 16:05:02.017708   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:02.017757   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:02.018754   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:02.018820   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:02.018830   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:02.018840   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:02.018851   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:02.018873   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:02.018882   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:02.018890   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:02.018899   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:02.018907   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:02.018924   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:02.018933   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:02.018949   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:02.018962   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:02.018978   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:02.018986   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:02.018993   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:02.019001   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:02.019007   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:02.019013   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:04.021035   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 22
	I1204 16:05:04.021049   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:04.021198   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:04.022212   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:04.022257   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:04.022280   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:04.022289   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:04.022297   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:04.022303   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:04.022321   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:04.022329   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:04.022346   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:04.022358   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:04.022366   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:04.022373   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:04.022380   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:04.022387   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:04.022394   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:04.022402   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:04.022408   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:04.022418   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:04.022435   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:04.022445   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:06.022702   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 23
	I1204 16:05:06.022718   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:06.022787   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:06.023765   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:06.023881   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:06.023888   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:06.023900   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:06.023933   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:06.023954   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:06.023968   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:06.023975   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:06.023982   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:06.023990   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:06.023997   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:06.024005   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:06.024013   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:06.024020   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:06.024025   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:06.024035   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:06.024046   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:06.024053   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:06.024060   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:06.024068   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:08.024705   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 24
	I1204 16:05:08.024717   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:08.024778   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:08.025809   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:08.025864   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:08.025873   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:08.025881   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:08.025887   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:08.025893   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:08.025901   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:08.025920   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:08.025932   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:08.025940   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:08.025949   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:08.025957   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:08.025964   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:08.025971   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:08.025978   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:08.025985   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:08.026005   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:08.026012   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:08.026019   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:08.026027   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:10.027616   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 25
	I1204 16:05:10.027631   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:10.027657   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:10.028671   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:10.028744   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:10.028756   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:10.028781   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:10.028791   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:10.028808   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:10.028816   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:10.028823   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:10.028833   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:10.028849   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:10.028862   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:10.028869   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:10.028877   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:10.028891   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:10.028910   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:10.028918   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:10.028923   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:10.028929   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:10.028935   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:10.028942   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:12.029167   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 26
	I1204 16:05:12.029179   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:12.029239   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:12.030229   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:12.030323   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:12.030331   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:12.030340   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:12.030345   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:12.030351   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:12.030356   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:12.030378   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:12.030387   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:12.030396   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:12.030401   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:12.030419   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:12.030442   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:12.030449   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:12.030464   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:12.030473   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:12.030480   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:12.030487   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:12.030495   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:12.030503   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:14.031960   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 27
	I1204 16:05:14.031971   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:14.032045   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:14.033045   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:14.033113   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:14.033123   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:14.033139   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:14.033158   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:14.033182   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:14.033198   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:14.033209   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:14.033227   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:14.033238   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:14.033249   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:14.033257   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:14.033272   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:14.033286   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:14.033294   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:14.033300   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:14.033306   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:14.033314   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:14.033321   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:14.033326   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:16.035010   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 28
	I1204 16:05:16.035023   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:16.035093   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:16.036129   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:16.036220   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:16.036229   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:16.036238   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:16.036243   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:16.036249   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:16.036254   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:16.036261   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:16.036266   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:16.036282   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:16.036295   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:16.036305   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:16.036312   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:16.036327   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:16.036336   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:16.036343   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:16.036351   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:16.036357   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:16.036379   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:16.036392   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:18.037410   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 29
	I1204 16:05:18.037425   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:18.037471   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:18.038458   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 6e:59:53:f0:4c:78 in /var/db/dhcpd_leases ...
	I1204 16:05:18.038541   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:18.038553   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:18.038561   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:18.038567   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:18.038592   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:18.038606   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:18.038617   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:18.038626   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:18.038634   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:18.038641   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:18.038648   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:18.038656   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:18.038669   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:18.038685   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:18.038696   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:18.038711   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:18.038717   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:18.038724   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:18.038732   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:20.040246   22192 client.go:171] duration metric: took 1m1.199272576s to LocalClient.Create
	I1204 16:05:22.042398   22192 start.go:128] duration metric: took 1m3.237005602s to createHost
	I1204 16:05:22.042415   22192 start.go:83] releasing machines lock for "offline-docker-182000", held for 1m3.237089512s
	W1204 16:05:22.042432   22192 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:59:53:f0:4c:78
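
	The arithmetic behind the giving-up: attempts are numbered from 0 and spaced about two seconds apart, so abandoning the search after Attempt 29 matches the "took 1m1.199272576s to LocalClient.Create" figure above (30 polls at ~2 s each, plus VM launch overhead). A generic poll-until-deadline helper of the kind this loop implies might look like the following sketch; waitForIP and the demo timings are illustrative, not minikube's code:

	package main

	import (
		"fmt"
		"time"
	)

	// waitForIP polls lookup once per interval until it yields a non-empty IP or
	// the deadline passes, mirroring the ~2s x 30-attempt loop in the log above.
	func waitForIP(lookup func() string, interval, deadline time.Duration) (string, error) {
		timeout := time.After(deadline)
		tick := time.NewTicker(interval)
		defer tick.Stop()
		for attempt := 0; ; attempt++ {
			if ip := lookup(); ip != "" {
				return ip, nil
			}
			select {
			case <-tick.C: // next attempt
			case <-timeout:
				return "", fmt.Errorf("IP address never found in dhcp leases file (gave up after attempt %d)", attempt)
			}
		}
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() string {
			calls++
			if calls > 3 { // pretend the lease shows up on the fourth poll
				return "192.169.0.20"
			}
			return ""
		}, 100*time.Millisecond, 2*time.Second)
		fmt.Println(ip, err)
	}
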
	I1204 16:05:22.042800   22192 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:05:22.042825   22192 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:05:22.053985   22192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60558
	I1204 16:05:22.054282   22192 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:05:22.054649   22192 main.go:141] libmachine: Using API Version  1
	I1204 16:05:22.054665   22192 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:05:22.054868   22192 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:05:22.055218   22192 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:05:22.055245   22192 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:05:22.066155   22192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60560
	I1204 16:05:22.066491   22192 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:05:22.066837   22192 main.go:141] libmachine: Using API Version  1
	I1204 16:05:22.066849   22192 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:05:22.067103   22192 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:05:22.067220   22192 main.go:141] libmachine: (offline-docker-182000) Calling .GetState
	I1204 16:05:22.067322   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:22.067384   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:22.068576   22192 main.go:141] libmachine: (offline-docker-182000) Calling .DriverName
	I1204 16:05:22.089678   22192 out.go:177] * Deleting "offline-docker-182000" in hyperkit ...
	I1204 16:05:22.148686   22192 main.go:141] libmachine: (offline-docker-182000) Calling .Remove
	I1204 16:05:22.148805   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:22.148814   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:22.148878   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:22.150048   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:22.150100   22192 main.go:141] libmachine: (offline-docker-182000) DBG | waiting for graceful shutdown
	I1204 16:05:23.150696   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:23.150824   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:23.152005   22192 main.go:141] libmachine: (offline-docker-182000) DBG | waiting for graceful shutdown
	I1204 16:05:24.152203   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:24.152294   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:24.153628   22192 main.go:141] libmachine: (offline-docker-182000) DBG | waiting for graceful shutdown
	I1204 16:05:25.154575   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:25.154636   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:25.155378   22192 main.go:141] libmachine: (offline-docker-182000) DBG | waiting for graceful shutdown
	I1204 16:05:26.155607   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:26.155691   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:26.156907   22192 main.go:141] libmachine: (offline-docker-182000) DBG | waiting for graceful shutdown
	I1204 16:05:27.158632   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:27.158725   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22233
	I1204 16:05:27.159424   22192 main.go:141] libmachine: (offline-docker-182000) DBG | sending sigkill
	I1204 16:05:27.159433   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
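
	The Remove call above tears the VM down in two stages: it polls the hyperkit pid once per second hoping for a graceful shutdown (the repeated "waiting for graceful shutdown" lines from 16:05:22 to 16:05:27), then escalates to SIGKILL after about five seconds. A standard-library sketch of that escalation; stopThenKill is a hypothetical helper, and the demo signals a throwaway child rather than a real hyperkit pid:

	package main

	import (
		"os"
		"os/exec"
		"syscall"
		"time"
	)

	// stopThenKill asks pid to exit, polls liveness for up to grace, then force-kills.
	func stopThenKill(pid int, grace time.Duration) error {
		p, _ := os.FindProcess(pid)   // never fails on Unix
		_ = p.Signal(syscall.SIGTERM) // request a graceful shutdown
		deadline := time.Now().Add(grace)
		for time.Now().Before(deadline) {
			// Signal 0 checks liveness only; an error means the process is gone.
			if err := p.Signal(syscall.Signal(0)); err != nil {
				return nil
			}
			time.Sleep(time.Second) // "waiting for graceful shutdown"
		}
		return p.Signal(syscall.SIGKILL) // "sending sigkill"
	}

	func main() {
		// Demo against a throwaway child. Caveat: an exited but unreaped child
		// still answers signal 0, which is why the driver tracks a non-child pid.
		child := exec.Command("sleep", "60")
		_ = child.Start()
		_ = stopThenKill(child.Process.Pid, 5*time.Second)
		_ = child.Wait() // reap
	}
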
	W1204 16:05:27.171370   22192 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:59:53:f0:4c:78
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:59:53:f0:4c:78
	I1204 16:05:27.171389   22192 start.go:729] Will try again in 5 seconds ...
	I1204 16:05:27.184200   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:05:27 WARN : hyperkit: failed to read stdout: EOF
	I1204 16:05:27.184217   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:05:27 WARN : hyperkit: failed to read stderr: EOF
	I1204 16:05:32.173614   22192 start.go:360] acquireMachinesLock for offline-docker-182000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 16:06:24.933174   22192 start.go:364] duration metric: took 52.757911844s to acquireMachinesLock for "offline-docker-182000"
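
	The 52.8 s spent in acquireMachinesLock is lock contention, not work: parallel tests serialize machine creation behind a host-wide lock, and the spec logged at 16:05:32 (Delay:500ms Timeout:13m0s) describes a retry-every-500 ms, give-up-after-13 m policy. A file-lock-flavored sketch of the same acquire loop; the lock-file path and acquire helper are hypothetical, not minikube's implementation:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire spins on an exclusive lock file, retrying every delay until timeout,
	// mirroring the Delay/Timeout fields of the lock spec in the log above.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s after %s", path, timeout)
			}
			time.Sleep(delay) // another minikube process still holds the lock
		}
	}

	func main() {
		release, err := acquire(os.TempDir()+"/offline-docker-182000.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer release()
		fmt.Println("lock held; provisioning can proceed")
	}
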
	I1204 16:06:24.933205   22192 start.go:93] Provisioning new machine with config: &{Name:offline-docker-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
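
	Only a handful of fields in that config dump actually drive this test: the hyperkit driver, the 2-CPU/2048 MB/20000 MB machine shape from the test flags, the ISO URL, and the Kubernetes version. A trimmed illustration of just those knobs (trimmedConfig is a hypothetical type, not minikube's ClusterConfig):

	package main

	import "fmt"

	// trimmedConfig captures just the provisioning knobs that matter in the
	// dump above; the full minikube config carries many more fields.
	type trimmedConfig struct {
		Name              string
		Driver            string
		CPUs              int
		MemoryMB          int
		DiskSizeMB        int
		MinikubeISO       string
		KubernetesVersion string
	}

	func main() {
		cfg := trimmedConfig{
			Name:              "offline-docker-182000",
			Driver:            "hyperkit",
			CPUs:              2,
			MemoryMB:          2048,
			DiskSizeMB:        20000,
			MinikubeISO:       "https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso",
			KubernetesVersion: "v1.31.2",
		}
		fmt.Printf("%+v\n", cfg)
	}
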
	I1204 16:06:24.933253   22192 start.go:125] createHost starting for "" (driver="hyperkit")
	I1204 16:06:24.954606   22192 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 16:06:24.954692   22192 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:06:24.954716   22192 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:06:24.965701   22192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60568
	I1204 16:06:24.966006   22192 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:06:24.966370   22192 main.go:141] libmachine: Using API Version  1
	I1204 16:06:24.966392   22192 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:06:24.966603   22192 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:06:24.966694   22192 main.go:141] libmachine: (offline-docker-182000) Calling .GetMachineName
	I1204 16:06:24.966809   22192 main.go:141] libmachine: (offline-docker-182000) Calling .DriverName
	I1204 16:06:24.966930   22192 start.go:159] libmachine.API.Create for "offline-docker-182000" (driver="hyperkit")
	I1204 16:06:24.966947   22192 client.go:168] LocalClient.Create starting
	I1204 16:06:24.966972   22192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem
	I1204 16:06:24.967032   22192 main.go:141] libmachine: Decoding PEM data...
	I1204 16:06:24.967043   22192 main.go:141] libmachine: Parsing certificate...
	I1204 16:06:24.967084   22192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem
	I1204 16:06:24.967132   22192 main.go:141] libmachine: Decoding PEM data...
	I1204 16:06:24.967145   22192 main.go:141] libmachine: Parsing certificate...
	I1204 16:06:24.967157   22192 main.go:141] libmachine: Running pre-create checks...
	I1204 16:06:24.967162   22192 main.go:141] libmachine: (offline-docker-182000) Calling .PreCreateCheck
	I1204 16:06:24.967251   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:24.967315   22192 main.go:141] libmachine: (offline-docker-182000) Calling .GetConfigRaw
	I1204 16:06:24.996575   22192 main.go:141] libmachine: Creating machine...
	I1204 16:06:24.996586   22192 main.go:141] libmachine: (offline-docker-182000) Calling .Create
	I1204 16:06:24.996681   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:24.996836   22192 main.go:141] libmachine: (offline-docker-182000) DBG | I1204 16:06:24.996673   22405 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:06:24.996891   22192 main.go:141] libmachine: (offline-docker-182000) Downloading /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-17258/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 16:06:25.246481   22192 main.go:141] libmachine: (offline-docker-182000) DBG | I1204 16:06:25.246407   22405 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/id_rsa...
	I1204 16:06:25.330532   22192 main.go:141] libmachine: (offline-docker-182000) DBG | I1204 16:06:25.330455   22405 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/offline-docker-182000.rawdisk...
	I1204 16:06:25.330541   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Writing magic tar header
	I1204 16:06:25.330549   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Writing SSH key tar header
	I1204 16:06:25.331169   22192 main.go:141] libmachine: (offline-docker-182000) DBG | I1204 16:06:25.331123   22405 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000 ...
	I1204 16:06:25.717604   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:25.717628   22192 main.go:141] libmachine: (offline-docker-182000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/hyperkit.pid
	I1204 16:06:25.717638   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Using UUID 75c165ee-6bcc-40e8-a3c5-e33d7131c78f
	I1204 16:06:25.741911   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Generated MAC 9a:6b:88:99:80:0f
	I1204 16:06:25.741930   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-182000
	I1204 16:06:25.741964   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"75c165ee-6bcc-40e8-a3c5-e33d7131c78f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:06:25.741996   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"75c165ee-6bcc-40e8-a3c5-e33d7131c78f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:06:25.742063   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "75c165ee-6bcc-40e8-a3c5-e33d7131c78f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/offline-docker-182000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-182000"}
	I1204 16:06:25.742115   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 75c165ee-6bcc-40e8-a3c5-e33d7131c78f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/offline-docker-182000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-182000"
	I1204 16:06:25.742128   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 16:06:25.746042   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 DEBUG: hyperkit: Pid is 22406
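
The Start/check structs and the Arguments list above fully determine the hyperkit invocation, and the driver runs it as uid=0 (hyperkit's vmnet networking requires root, and the exe=... lines above confirm the root uid). To reproduce this boot outside the test harness, the same command can be rebuilt verbatim. The Go sketch below is a hypothetical standalone helper, not part of the driver; every path and flag is copied from the CmdLine logged above:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Machine state directory, copied verbatim from the logged CmdLine.
	state := "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000"
	// kexec boot spec: kernel, initrd, and the guest kernel command line.
	kexec := "kexec," + state + "/bzimage," + state + "/initrd," +
		"earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore " +
		"waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on " +
		"hw_rng_model=virtio base host=offline-docker-182000"
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", state+"/hyperkit.pid",
		"-c", "2", "-m", "2048M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", "75c165ee-6bcc-40e8-a3c5-e33d7131c78f",
		"-s", "2:0,virtio-blk,"+state+"/offline-docker-182000.rawdisk",
		"-s", "3,ahci-cd,"+state+"/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty="+state+"/tty,log="+state+"/console-ring",
		"-f", kexec,
	)
	// Surface the same vmx_set_ctlreg/rdmsr stderr noise seen in this log.
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}

While this runs, the console-ring file named in the -l com1 argument should show how far the guest boots before (or instead of) requesting a DHCP lease.
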
	I1204 16:06:25.746494   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 0
	I1204 16:06:25.746514   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:25.746617   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:25.747998   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:25.748068   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:25.748077   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:25.748086   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:25.748092   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:25.748102   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:25.748121   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:25.748135   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:25.748143   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:25.748148   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:25.748163   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:25.748179   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:25.748187   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:25.748192   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:25.748199   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:25.748204   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:25.748226   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:25.748238   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:25.748250   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:25.748262   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
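
Each "Attempt N" block in this log is one pass of the driver's IP discovery loop: roughly every two seconds (compare the attempt timestamps) it re-reads /var/db/dhcpd_leases looking for the MAC generated for this VM, 9a:6b:88:99:80:0f, and each pass finds only the same 18 leases left by earlier minikube VMs. The Go sketch below shows such a scan; it assumes the macOS bootpd lease format (brace-delimited blocks of name=/ip_address=/hw_address= lines, with ip_address preceding hw_address) and is not the driver's actual parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans a bootpd lease file for a MAC address and returns the
// IP of the matching lease. Sketch only: assumes each lease is a
// brace-delimited block in which an ip_address= line precedes its
// hw_address= line, matching the entries dumped in the log above.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	ip := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
			return ip, nil // hw_address=1,<mac>
		}
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	// bootpd drops leading zeros in each octet (the log shows
	// 92:0d:49:fe:04:ec stored as 92:d:49:fe:4:ec), so search for the
	// zero-stripped form of the generated MAC 9a:6b:88:99:80:0f.
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "9a:6b:88:99:80:f")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

In the portion of the log shown here the generated MAC never appears in any attempt, i.e. the guest never completed a DHCP exchange in this window, which is why the retries continue below.
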
	I1204 16:06:25.755451   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 16:06:25.764149   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/offline-docker-182000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 16:06:25.765063   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:06:25.765089   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:06:25.765101   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:06:25.765117   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:06:26.150939   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 16:06:26.150955   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 16:06:26.265540   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:06:26.265593   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:06:26.265612   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:06:26.265651   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:06:26.266453   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 16:06:26.266462   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 16:06:27.748503   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 1
	I1204 16:06:27.748517   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:27.748595   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:27.749612   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:27.749688   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:27.749700   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:27.749709   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:27.749715   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:27.749733   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:27.749744   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:27.749761   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:27.749769   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:27.749775   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:27.749782   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:27.749795   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:27.749807   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:27.749816   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:27.749823   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:27.749830   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:27.749840   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:27.749847   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:27.749858   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:27.749866   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:29.751527   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 2
	I1204 16:06:29.751541   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:29.751602   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:29.752610   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:29.752696   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:29.752704   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:29.752712   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:29.752720   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:29.752738   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:29.752764   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:29.752773   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:29.752780   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:29.752786   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:29.752794   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:29.752806   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:29.752815   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:29.752830   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:29.752842   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:29.752862   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:29.752871   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:29.752878   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:29.752886   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:29.752897   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:31.619023   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 16:06:31.619158   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 16:06:31.619170   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 16:06:31.638956   22192 main.go:141] libmachine: (offline-docker-182000) DBG | 2024/12/04 16:06:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 16:06:31.754547   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 3
	I1204 16:06:31.754571   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:31.754726   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:31.756007   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:31.756209   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:31.756222   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:31.756233   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:31.756243   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:31.756253   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:31.756261   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:31.756284   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:31.756303   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:31.756320   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:31.756329   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:31.756341   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:31.756356   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:31.756365   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:31.756375   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:31.756386   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:31.756394   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:31.756412   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:31.756428   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:31.756441   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:33.756892   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 4
	I1204 16:06:33.756918   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:33.756965   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:33.757973   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:33.758075   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:33.758087   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:33.758095   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:33.758103   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:33.758110   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:33.758117   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:33.758126   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:33.758133   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:33.758138   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:33.758150   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:33.758159   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:33.758168   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:33.758176   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:33.758182   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:33.758193   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:33.758200   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:33.758208   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:33.758214   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:33.758221   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:35.760421   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 5
	I1204 16:06:35.760437   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:35.760490   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:35.761554   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:35.761627   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:35.761643   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:35.761652   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:35.761658   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:35.761687   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:35.761701   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:35.761716   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:35.761728   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:35.761735   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:35.761743   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:35.761752   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:35.761761   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:35.761775   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:35.761788   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:35.761808   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:35.761819   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:35.761827   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:35.761834   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:35.761850   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:37.763889   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 6
	I1204 16:06:37.763903   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:37.763946   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:37.765037   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:37.765149   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:37.765156   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:37.765170   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:37.765178   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:37.765194   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:37.765206   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:37.765214   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:37.765222   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:37.765230   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:37.765236   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:37.765243   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:37.765269   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:37.765280   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:37.765288   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:37.765306   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:37.765319   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:37.765332   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:37.765342   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:37.765350   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:39.766219   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 7
	I1204 16:06:39.766234   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:39.766291   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:39.767402   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:39.767478   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:39.767488   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:39.767504   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:39.767510   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:39.767516   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:39.767521   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:39.767550   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:39.767560   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:39.767569   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:39.767577   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:39.767586   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:39.767594   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:39.767600   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:39.767606   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:39.767620   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:39.767631   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:39.767639   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:39.767656   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:39.767672   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:41.769784   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 8
	I1204 16:06:41.769807   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:41.769817   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:41.770903   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:41.770999   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:41.771010   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:41.771019   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:41.771025   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:41.771032   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:41.771038   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:41.771048   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:41.771055   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:41.771064   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:41.771070   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:41.771078   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:41.771095   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:41.771105   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:41.771112   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:41.771119   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:41.771127   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:41.771139   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:41.771154   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:41.771171   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:43.772745   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 9
	I1204 16:06:43.772760   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:43.772831   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:43.773869   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:43.773937   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:43.773945   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:43.773955   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:43.773967   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:43.773977   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:43.773989   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:43.773997   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:43.774004   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:43.774018   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:43.774026   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:43.774032   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:43.774039   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:43.774046   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:43.774054   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:43.774068   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:43.774080   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:43.774088   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:43.774093   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:43.774102   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:45.775136   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 10
	I1204 16:06:45.775151   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:45.775206   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:45.776410   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:45.776530   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:45.776540   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:45.776550   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:45.776556   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:45.776563   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:45.776588   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:45.776606   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:45.776617   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:45.776625   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:45.776633   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:45.776641   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:45.776649   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:45.776655   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:45.776661   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:45.776667   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:45.776673   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:45.776682   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:45.776699   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:45.776710   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:47.777723   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 11
	I1204 16:06:47.777739   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:47.777799   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:47.778782   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:47.778846   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:47.778855   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:47.778863   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:47.778882   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:47.778890   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:47.778896   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:47.778902   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:47.778908   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:47.778916   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:47.778923   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:47.778930   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:47.778938   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:47.778953   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:47.778964   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:47.778976   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:47.778984   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:47.778992   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:47.779001   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:47.779010   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:49.780889   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 12
	I1204 16:06:49.780905   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:49.780977   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:49.782186   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:49.782365   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:49.782376   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:49.782388   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:49.782394   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:49.782400   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:49.782407   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:49.782432   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:49.782444   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:49.782451   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:49.782459   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:49.782465   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:49.782472   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:49.782479   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:49.782486   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:49.782498   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:49.782510   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:49.782518   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:49.782538   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:49.782556   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:51.782609   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 13
	I1204 16:06:51.782622   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:51.782757   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:51.783746   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:51.783816   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:51.783826   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:51.783834   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:51.783844   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:51.783851   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:51.783856   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:51.783868   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:51.783878   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:51.783887   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:51.783897   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:51.783913   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:51.783923   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:51.783931   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:51.783937   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:51.783943   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:51.783950   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:51.783955   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:51.783961   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:51.783979   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:53.784482   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 14
	I1204 16:06:53.784495   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:53.784576   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:53.785564   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:53.785649   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:53.785672   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:53.785705   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:53.785710   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:53.785716   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:53.785722   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:53.785728   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:53.785736   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:53.785743   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:53.785749   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:53.785755   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:53.785762   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:53.785778   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:53.785791   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:53.785807   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:53.785818   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:53.785827   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:53.785836   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:53.785844   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:55.787661   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 15
	I1204 16:06:55.787677   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:55.787728   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:55.788910   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:55.788980   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:55.788990   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:55.788998   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:55.789003   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:55.789011   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:55.789022   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:55.789033   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:55.789044   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:55.789053   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:55.789069   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:55.789078   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:55.789090   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:55.789098   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:55.789105   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:55.789112   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:55.789127   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:55.789142   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:55.789150   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:55.789157   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:57.790655   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 16
	I1204 16:06:57.790670   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:57.790719   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:57.791711   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:57.791778   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:57.791786   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:57.791795   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:57.791808   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:57.791838   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:57.791851   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:57.791866   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:57.791874   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:57.791881   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:57.791888   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:57.791900   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:57.791909   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:57.791917   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:57.791924   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:57.791931   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:57.791937   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:57.791945   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:57.791952   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:57.791958   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:59.794019   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 17
	I1204 16:06:59.794035   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:59.794070   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:06:59.795179   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:06:59.795290   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:59.795302   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:59.795309   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:59.795314   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:59.795344   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:59.795357   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:59.795366   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:59.795373   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:59.795384   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:59.795393   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:59.795400   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:59.795407   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:59.795414   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:59.795423   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:59.795430   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:59.795438   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:59.795444   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:59.795458   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:59.795466   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:01.797580   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 18
	I1204 16:07:01.797608   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:01.797650   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:01.798620   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:01.798716   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:01.798723   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:01.798730   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:01.798735   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:01.798763   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:01.798776   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:01.798783   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:01.798791   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:01.798798   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:01.798805   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:01.798816   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:01.798823   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:01.798830   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:01.798838   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:01.798848   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:01.798858   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:01.798867   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:01.798873   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:01.798880   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:03.800356   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 19
	I1204 16:07:03.800372   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:03.800434   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:03.801407   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:03.801506   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:03.801515   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:03.801523   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:03.801531   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:03.801538   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:03.801543   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:03.801551   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:03.801557   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:03.801563   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:03.801571   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:03.801578   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:03.801586   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:03.801610   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:03.801622   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:03.801635   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:03.801643   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:03.801650   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:03.801659   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:03.801668   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:05.801996   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 20
	I1204 16:07:05.802013   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:05.802055   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:05.803429   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:05.803500   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:05.803510   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:05.803516   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:05.803522   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:05.803536   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:05.803544   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:05.803550   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:05.803556   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:05.803561   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:05.803566   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:05.803573   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:05.803579   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:05.803585   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:05.803590   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:05.803610   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:05.803621   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:05.803627   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:05.803635   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:05.803642   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:07.804308   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 21
	I1204 16:07:07.804323   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:07.804389   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:07.805467   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:07.805564   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:07.805577   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:07.805584   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:07.805592   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:07.805599   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:07.805605   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:07.805612   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:07.805620   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:07.805626   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:07.805632   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:07.805651   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:07.805666   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:07.805678   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:07.805686   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:07.805692   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:07.805700   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:07.805708   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:07.805716   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:07.805724   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:09.807545   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 22
	I1204 16:07:09.807561   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:09.807663   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:09.808639   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:09.808704   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:09.808713   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:09.808725   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:09.808738   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:09.808744   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:09.808753   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:09.808759   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:09.808765   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:09.808779   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:09.808787   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:09.808794   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:09.808801   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:09.808809   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:09.808820   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:09.808830   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:09.808837   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:09.808852   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:09.808864   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:09.808874   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:11.810924   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 23
	I1204 16:07:11.810938   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:11.811012   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:11.812034   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:11.812129   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:11.812139   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:11.812146   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:11.812153   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:11.812161   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:11.812170   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:11.812185   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:11.812201   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:11.812208   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:11.812214   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:11.812226   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:11.812244   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:11.812254   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:11.812262   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:11.812277   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:11.812284   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:11.812292   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:11.812300   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:11.812308   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:13.812633   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 24
	I1204 16:07:13.812648   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:13.812693   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:13.813674   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:13.813757   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:13.813767   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:13.813785   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:13.813799   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:13.813806   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:13.813814   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:13.813821   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:13.813831   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:13.813838   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:13.813844   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:13.813852   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:13.813869   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:13.813882   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:13.813890   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:13.813898   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:13.813908   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:13.813915   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:13.813924   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:13.813931   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:15.814299   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 25
	I1204 16:07:15.814317   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:15.814440   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:15.815457   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:15.815533   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:15.815542   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:15.815550   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:15.815555   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:15.815561   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:15.815566   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:15.815573   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:15.815578   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:15.815605   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:15.815617   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:15.815624   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:15.815632   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:15.815646   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:15.815656   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:15.815663   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:15.815671   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:15.815701   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:15.815714   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:15.815723   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:17.816757   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 26
	I1204 16:07:17.816769   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:17.816845   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:17.817865   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:17.817948   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:17.817961   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:17.817981   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:17.817992   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:17.818002   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:17.818010   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:17.818016   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:17.818023   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:17.818029   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:17.818041   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:17.818054   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:17.818063   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:17.818069   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:17.818079   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:17.818087   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:17.818093   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:17.818099   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:17.818105   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:17.818119   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:19.820216   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 27
	I1204 16:07:19.820227   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:19.820268   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:19.821324   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:19.821434   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:19.821445   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:19.821452   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:19.821460   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:19.821466   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:19.821474   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:19.821480   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:19.821486   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:19.821498   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:19.821510   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:19.821519   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:19.821527   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:19.821534   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:19.821542   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:19.821556   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:19.821576   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:19.821590   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:19.821603   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:19.821611   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:21.823647   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 28
	I1204 16:07:21.823661   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:21.823722   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:21.824789   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:21.824872   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:21.824884   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:21.824904   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:21.824913   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:21.824920   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:21.824927   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:21.824935   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:21.824944   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:21.824955   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:21.824976   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:21.824991   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:21.825004   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:21.825015   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:21.825024   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:21.825032   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:21.825040   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:21.825051   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:21.825058   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:21.825066   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:23.825718   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Attempt 29
	I1204 16:07:23.825732   22192 main.go:141] libmachine: (offline-docker-182000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:23.825775   22192 main.go:141] libmachine: (offline-docker-182000) DBG | hyperkit pid from json: 22406
	I1204 16:07:23.826818   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Searching for 9a:6b:88:99:80:0f in /var/db/dhcpd_leases ...
	I1204 16:07:23.826944   22192 main.go:141] libmachine: (offline-docker-182000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:23.826962   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:23.826970   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:23.826975   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:23.826982   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:23.826996   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:23.827004   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:23.827011   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:23.827018   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:23.827026   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:23.827033   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:23.827040   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:23.827062   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:23.827071   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:23.827086   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:23.827099   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:23.827117   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:23.827129   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:23.827143   22192 main.go:141] libmachine: (offline-docker-182000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:25.829219   22192 client.go:171] duration metric: took 1m0.860436492s to LocalClient.Create
	I1204 16:07:27.830982   22192 start.go:128] duration metric: took 1m2.895830861s to createHost
	I1204 16:07:27.830997   22192 start.go:83] releasing machines lock for "offline-docker-182000", held for 1m2.895920329s
	W1204 16:07:27.831093   22192 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-182000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:6b:88:99:80:0f
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-182000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:6b:88:99:80:0f
	I1204 16:07:27.894285   22192 out.go:201] 
	W1204 16:07:27.915361   22192 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:6b:88:99:80:0f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:6b:88:99:80:0f
	W1204 16:07:27.915375   22192 out.go:270] * 
	* 
	W1204 16:07:27.916070   22192 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 16:07:27.978313   22192 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-182000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-04 16:07:28.091951 -0800 PST m=+3298.783883377
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-182000 -n offline-docker-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-182000 -n offline-docker-182000: exit status 7 (103.848335ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 16:07:28.193361   22432 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1204 16:07:28.193384   22432 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-182000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-182000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-182000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-182000: (5.291091061s)
--- FAIL: TestOffline (195.23s)
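
The failure above is the hyperkit driver's IP-discovery loop timing out: roughly every two seconds it re-reads /var/db/dhcpd_leases looking for the VM's MAC address (9a:6b:88:99:80:0f), and after 30 attempts it gives up with "IP address never found in dhcp leases file". What follows is a minimal Go sketch of that loop, not minikube's actual implementation; the field names assume the brace-delimited key=value format macOS vmnet writes, and the octet normalization is there because the leases file stores unpadded octets (note entries like 92:d:49:fe:4:ec above for 92:0d:49:fe:04:ec).

	// leasescan.go - hedged sketch of polling /var/db/dhcpd_leases for a MAC.
	package main

	import (
		"fmt"
		"os"
		"strings"
		"time"
	)

	// normalizeMAC lowercases a MAC and strips leading zeros from each octet,
	// matching the unpadded form dhcpd writes into the leases file.
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			parts[i] = strings.TrimLeft(p, "0")
			if parts[i] == "" {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}

	// findIP scans the leases file once and returns the ip_address of the
	// entry whose hw_address matches mac. It relies on ip_address preceding
	// hw_address within each entry, as it does in the vmnet lease format.
	func findIP(path, mac string) (string, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return "", err
		}
		want := normalizeMAC(mac)
		var ip string
		for _, line := range strings.Split(string(data), "\n") {
			line = strings.TrimSpace(line)
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.Index(hw, ","); i >= 0 {
					hw = hw[i+1:] // drop the "1," address-type prefix
				}
				if normalizeMAC(hw) == want {
					return ip, nil
				}
			}
		}
		return "", nil
	}

	func main() {
		const mac = "9a:6b:88:99:80:0f" // the MAC from the failing run above
		for attempt := 1; attempt <= 30; attempt++ {
			ip, err := findIP("/var/db/dhcpd_leases", mac)
			if err == nil && ip != "" {
				fmt.Println("found IP:", ip)
				return
			}
			time.Sleep(2 * time.Second) // the log shows ~2s between attempts
		}
		fmt.Println("IP address never found in dhcp leases file")
	}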

TestCertOptions (252.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-111000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E1204 16:14:01.931605   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:14:29.642846   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-111000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.248152425s)

-- stdout --
	* [cert-options-111000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-111000" primary control-plane node in "cert-options-111000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-111000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:1d:b4:5d:a5:8f
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-111000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:90:de:d7:9a:ee
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:90:de:d7:9a:ee
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-111000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-111000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-111000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (182.291796ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-111000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-111000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-111000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-111000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-111000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (180.097064ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-111000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-111000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-111000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-04 16:16:55.674547 -0800 PST m=+3866.410940033
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-111000 -n cert-options-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-111000 -n cert-options-111000: exit status 7 (101.153176ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 16:16:55.773653   22790 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1204 16:16:55.773676   22790 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-111000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-111000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-111000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-111000: (5.264053792s)
--- FAIL: TestCertOptions (252.03s)
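
For reference, the SAN assertions that cert_options_test.go:69 reports as failed are the Go equivalent of the "openssl x509 -text -noout" call above: decode the apiserver certificate and check its IP and DNS subject alternative names. Below is a minimal standard-library sketch under that reading; the certificate path is the one the test reads over SSH, so it only exists inside a running VM, which is exactly what never came up here.

	// sancheck.go - hedged sketch of the SAN checks TestCertOptions performs.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Println("read cert:", err)
			return
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse cert:", err)
			return
		}
		// The IPs and names requested on the failing command line above.
		wantIPs := []string{"127.0.0.1", "192.168.15.15"}
		wantNames := []string{"localhost", "www.google.com"}
		for _, w := range wantIPs {
			found := false
			for _, ip := range cert.IPAddresses {
				if ip.Equal(net.ParseIP(w)) {
					found = true
					break
				}
			}
			fmt.Printf("SAN IP %s present: %v\n", w, found)
		}
		for _, w := range wantNames {
			found := false
			for _, d := range cert.DNSNames {
				if d == w {
					found = true
					break
				}
			}
			fmt.Printf("SAN DNS %s present: %v\n", w, found)
		}
	}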

TestCertExpiration (1770.4s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E1204 16:11:45.799551   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.585264571s)

-- stdout --
	* [cert-expiration-334000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-334000" primary control-plane node in "cert-expiration-334000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-334000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 92:7b:c1:54:80:f2
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-334000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:fe:1e:63:4c:8d
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:fe:1e:63:4c:8d
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
E1204 16:15:53.908501   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:16:36.619835   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E1204 16:19:01.931646   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (22m18.408073691s)

-- stdout --
	* [cert-expiration-334000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-334000" primary control-plane node in "cert-expiration-334000" cluster
	* Updating the running hyperkit "cert-expiration-334000" VM ...
	* Updating the running hyperkit "cert-expiration-334000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-334000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-334000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-334000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-334000" primary control-plane node in "cert-expiration-334000" cluster
	* Updating the running hyperkit "cert-expiration-334000" VM ...
	* Updating the running hyperkit "cert-expiration-334000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-334000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-04 16:41:10.623066 -0800 PST m=+5321.372583469
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-334000 -n cert-expiration-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-334000 -n cert-expiration-334000: exit status 7 (103.831004ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 16:41:10.724068   24663 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1204 16:41:10.724089   24663 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-334000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-334000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-334000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-334000: (5.299517708s)
--- FAIL: TestCertExpiration (1770.40s)
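
TestCertExpiration's second start (--cert-expiration=8760h) is expected to warn that the three-minute certificates issued by the first start have lapsed; that warning boils down to comparing a certificate's NotAfter timestamp against the current time. A minimal sketch of that comparison follows, with an illustrative local file path rather than anything the test itself uses.

	// certexpiry.go - hedged sketch of a NotAfter expiry check.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		pemBytes, err := os.ReadFile("apiserver.crt") // illustrative path
		if err != nil {
			fmt.Println("read cert:", err)
			return
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse cert:", err)
			return
		}
		if remaining := time.Until(cert.NotAfter); remaining <= 0 {
			fmt.Printf("certificate expired %v ago (NotAfter %v)\n", -remaining, cert.NotAfter)
		} else {
			fmt.Printf("certificate still valid for %v (NotAfter %v)\n", remaining, cert.NotAfter)
		}
	}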

TestDockerFlags (252.36s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-718000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E1204 16:09:01.933223   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:01.940352   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:01.952771   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:01.974982   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:02.016562   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:02.098166   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:02.259627   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:02.580811   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:03.222855   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:04.506079   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:07.067521   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:12.190455   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:22.433232   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:09:42.914250   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:10:23.877561   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:10:53.907495   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:11:19.708191   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:11:36.620319   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-718000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.534052044s)

-- stdout --
	* [docker-flags-718000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-718000" primary control-plane node in "docker-flags-718000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-718000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1204 16:08:36.735720   22496 out.go:345] Setting OutFile to fd 1 ...
	I1204 16:08:36.735931   22496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 16:08:36.735936   22496 out.go:358] Setting ErrFile to fd 2...
	I1204 16:08:36.735940   22496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 16:08:36.736124   22496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 16:08:36.737813   22496 out.go:352] Setting JSON to false
	I1204 16:08:36.766527   22496 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7686,"bootTime":1733349630,"procs":555,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 16:08:36.766675   22496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 16:08:36.790678   22496 out.go:177] * [docker-flags-718000] minikube v1.34.0 on Darwin 15.0.1
	I1204 16:08:36.833875   22496 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 16:08:36.833907   22496 notify.go:220] Checking for updates...
	I1204 16:08:36.876257   22496 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 16:08:36.896428   22496 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 16:08:36.918467   22496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 16:08:36.939202   22496 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:08:36.959380   22496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 16:08:36.980848   22496 config.go:182] Loaded profile config "force-systemd-flag-492000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 16:08:36.980941   22496 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 16:08:37.012236   22496 out.go:177] * Using the hyperkit driver based on user configuration
	I1204 16:08:37.053291   22496 start.go:297] selected driver: hyperkit
	I1204 16:08:37.053307   22496 start.go:901] validating driver "hyperkit" against <nil>
	I1204 16:08:37.053328   22496 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 16:08:37.058745   22496 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 16:08:37.058881   22496 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 16:08:37.069811   22496 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 16:08:37.076403   22496 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:08:37.076438   22496 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 16:08:37.076468   22496 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 16:08:37.076716   22496 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1204 16:08:37.076751   22496 cni.go:84] Creating CNI manager for ""
	I1204 16:08:37.076788   22496 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 16:08:37.076794   22496 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 16:08:37.076857   22496 start.go:340] cluster config:
	{Name:docker-flags-718000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 16:08:37.076942   22496 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 16:08:37.098136   22496 out.go:177] * Starting "docker-flags-718000" primary control-plane node in "docker-flags-718000" cluster
	I1204 16:08:37.139223   22496 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 16:08:37.139266   22496 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1204 16:08:37.139279   22496 cache.go:56] Caching tarball of preloaded images
	I1204 16:08:37.139404   22496 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 16:08:37.139413   22496 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 16:08:37.139493   22496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/docker-flags-718000/config.json ...
	I1204 16:08:37.139512   22496 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/docker-flags-718000/config.json: {Name:mk1d5a3882107a98e5be28f804c78bb4ea09f105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 16:08:37.139875   22496 start.go:360] acquireMachinesLock for docker-flags-718000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 16:09:33.868162   22496 start.go:364] duration metric: took 56.741140331s to acquireMachinesLock for "docker-flags-718000"
	I1204 16:09:33.868203   22496 start.go:93] Provisioning new machine with config: &{Name:docker-flags-718000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 16:09:33.868272   22496 start.go:125] createHost starting for "" (driver="hyperkit")
	I1204 16:09:33.889607   22496 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 16:09:33.889815   22496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:09:33.889844   22496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:09:33.901168   22496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60602
	I1204 16:09:33.901516   22496 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:09:33.902013   22496 main.go:141] libmachine: Using API Version  1
	I1204 16:09:33.902023   22496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:09:33.902302   22496 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:09:33.902522   22496 main.go:141] libmachine: (docker-flags-718000) Calling .GetMachineName
	I1204 16:09:33.902688   22496 main.go:141] libmachine: (docker-flags-718000) Calling .DriverName
	I1204 16:09:33.902799   22496 start.go:159] libmachine.API.Create for "docker-flags-718000" (driver="hyperkit")
	I1204 16:09:33.902826   22496 client.go:168] LocalClient.Create starting
	I1204 16:09:33.902858   22496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem
	I1204 16:09:33.902927   22496 main.go:141] libmachine: Decoding PEM data...
	I1204 16:09:33.902945   22496 main.go:141] libmachine: Parsing certificate...
	I1204 16:09:33.903007   22496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem
	I1204 16:09:33.903053   22496 main.go:141] libmachine: Decoding PEM data...
	I1204 16:09:33.903064   22496 main.go:141] libmachine: Parsing certificate...
	I1204 16:09:33.903076   22496 main.go:141] libmachine: Running pre-create checks...
	I1204 16:09:33.903083   22496 main.go:141] libmachine: (docker-flags-718000) Calling .PreCreateCheck
	I1204 16:09:33.903180   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:33.903323   22496 main.go:141] libmachine: (docker-flags-718000) Calling .GetConfigRaw
	I1204 16:09:33.952323   22496 main.go:141] libmachine: Creating machine...
	I1204 16:09:33.952346   22496 main.go:141] libmachine: (docker-flags-718000) Calling .Create
	I1204 16:09:33.952474   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:33.952661   22496 main.go:141] libmachine: (docker-flags-718000) DBG | I1204 16:09:33.952441   22526 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:09:33.952737   22496 main.go:141] libmachine: (docker-flags-718000) Downloading /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-17258/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 16:09:34.181917   22496 main.go:141] libmachine: (docker-flags-718000) DBG | I1204 16:09:34.181799   22526 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/id_rsa...
	I1204 16:09:34.241493   22496 main.go:141] libmachine: (docker-flags-718000) DBG | I1204 16:09:34.241412   22526 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/docker-flags-718000.rawdisk...
	I1204 16:09:34.241505   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Writing magic tar header
	I1204 16:09:34.241517   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Writing SSH key tar header
	I1204 16:09:34.242101   22496 main.go:141] libmachine: (docker-flags-718000) DBG | I1204 16:09:34.242058   22526 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000 ...
	I1204 16:09:34.622764   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:34.622783   22496 main.go:141] libmachine: (docker-flags-718000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/hyperkit.pid
	I1204 16:09:34.622814   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Using UUID ffe2928c-124d-42de-a0a2-a11e2c4a244a
	I1204 16:09:34.647981   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Generated MAC 62:54:2c:74:f2:ca
	I1204 16:09:34.647997   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-718000
	I1204 16:09:34.648035   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ffe2928c-124d-42de-a0a2-a11e2c4a244a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:09:34.648067   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ffe2928c-124d-42de-a0a2-a11e2c4a244a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:09:34.648110   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ffe2928c-124d-42de-a0a2-a11e2c4a244a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/docker-flags-718000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-718000"}
	I1204 16:09:34.648152   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ffe2928c-124d-42de-a0a2-a11e2c4a244a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/docker-flags-718000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-718000"
	I1204 16:09:34.648218   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 16:09:34.651164   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 DEBUG: hyperkit: Pid is 22527
	I1204 16:09:34.651609   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 0
	I1204 16:09:34.651630   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:34.651731   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:09:34.652872   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases ...
	I1204 16:09:34.652966   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:34.652981   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:34.653012   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:34.653029   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:34.653069   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:34.653085   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:34.653098   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:34.653109   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:34.653124   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:34.653138   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:34.653149   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:34.653160   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:34.653168   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:34.653176   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:34.653194   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:34.653207   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:34.653215   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:34.653223   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:34.653245   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
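
Note on the repeated "Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases" blocks above and below: after launching the VM, the hyperkit driver polls macOS bootpd's lease file until the freshly generated MAC appears, and only then knows the VM's IP; each "Attempt N" re-reads the file and dumps the 18 leases currently present, none of which match. As a rough illustration of that lookup (a minimal sketch only, not minikube's actual code; it assumes the commonly documented dhcpd_leases layout of brace-delimited key=value blocks, and note that bootpd may strip leading zeros from hw_address octets, as the ID fields above show):

    // lease_lookup.go - illustrative sketch, not minikube's implementation.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findLease scans the lease file for a block whose hw_address matches mac.
    func findLease(path, mac string) (string, bool) {
        f, err := os.Open(path)
        if err != nil {
            return "", false
        }
        defer f.Close()

        var ip, hw string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // value looks like "1,62:54:2c:74:f2:ca"; drop the "1," type
                // prefix. A robust matcher would also normalize octets, since
                // bootpd may write "92:d:49:fe:4:ec" for 92:0d:49:fe:04:ec.
                parts := strings.SplitN(strings.TrimPrefix(line, "hw_address="), ",", 2)
                hw = parts[len(parts)-1]
            case line == "}": // end of one lease block
                if strings.EqualFold(hw, mac) {
                    return ip, true
                }
                ip, hw = "", ""
            }
        }
        return "", false
    }

    func main() {
        // MAC generated for the VM in the log above.
        if ip, ok := findLease("/var/db/dhcpd_leases", "62:54:2c:74:f2:ca"); ok {
            fmt.Println("lease found:", ip)
        } else {
            fmt.Println("no lease yet; retry")
        }
    }

In this run the MAC never shows up within the driver's retry budget, which is consistent with the delete-and-recreate visible in the stdout block above and the eventual exit status 80.
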
	I1204 16:09:34.661474   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 16:09:34.670106   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 16:09:34.671094   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:09:34.671110   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:09:34.671122   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:09:34.671138   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:09:35.055115   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 16:09:35.055130   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 16:09:35.169820   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:09:35.169844   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:09:35.169903   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:09:35.169930   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:09:35.170703   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 16:09:35.170714   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 16:09:36.654246   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 1
	I1204 16:09:36.654260   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:36.654330   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:09:36.655347   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases ...
	I1204 16:09:36.655408   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:36.655419   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:36.655435   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:36.655445   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:36.655451   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:36.655457   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:36.655466   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:36.655485   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:36.655497   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:36.655506   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:36.655528   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:36.655535   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:36.655544   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:36.655550   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:36.655577   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:36.655586   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:36.655595   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:36.655601   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:36.655606   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:38.656302   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 2
	I1204 16:09:38.656318   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:38.656347   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:09:38.657356   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases ...
	I1204 16:09:38.657440   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:38.657450   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:38.657459   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:38.657465   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:38.657472   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:38.657482   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:38.657489   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:38.657496   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:38.657510   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:38.657523   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:38.657531   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:38.657546   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:38.657563   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:38.657574   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:38.657583   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:38.657591   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:38.657597   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:38.657612   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:38.657620   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:40.523765   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 16:09:40.523882   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 16:09:40.523898   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 16:09:40.543069   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:09:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 16:09:40.659748   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 3
	I1204 16:09:40.659776   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:40.659998   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:09:40.661805   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases ...
	I1204 16:09:40.662007   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:40.662020   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:40.662029   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:40.662036   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:40.662059   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:40.662075   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:40.662102   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:40.662120   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:40.662130   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:40.662150   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:40.662172   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:40.662182   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:40.662200   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:40.662211   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:40.662230   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:40.662248   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:40.662258   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:40.662271   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:40.662285   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:42.663762   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 4
	I1204 16:09:42.663782   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:42.663877   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:09:42.664848   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases ...
	I1204 16:09:42.664942   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:42.664952   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:42.664961   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:42.664968   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:42.664978   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:42.664986   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:42.664992   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:42.664998   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:42.665009   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:42.665022   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:42.665037   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:42.665050   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:42.665060   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:42.665067   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:42.665074   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:42.665093   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:42.665099   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:42.665105   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:42.665113   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:44.666521   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 5
	I1204 16:09:44.666540   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:44.666602   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:09:44.667561   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases ...
	I1204 16:09:44.667664   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:44.667676   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:44.667704   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:44.667713   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:44.667725   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:44.667734   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:44.667754   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:44.667768   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:44.667781   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:44.667810   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:44.667823   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:44.667830   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:44.667839   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:44.667846   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:44.667853   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:44.667878   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:44.667893   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:44.667905   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:44.667918   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:46.668238   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 6
	I1204 16:09:46.668253   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:46.668350   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:09:46.669425   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases ...
	I1204 16:09:46.669556   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:46.669565   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:46.669571   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:46.669582   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:46.669602   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:46.669620   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:46.669628   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:46.669633   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:46.669650   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:46.669661   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:46.669677   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:46.669691   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:46.669710   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:46.669723   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:46.669730   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:46.669736   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:46.669745   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:46.669754   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:46.669763   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:48.669747   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 7
	I1204 16:09:48.669761   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:48.669813   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:09:48.670799   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases ...
	I1204 16:09:48.670868   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:48.670878   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:48.670894   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:48.670901   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:48.670908   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:48.670914   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:48.670927   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:48.670940   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:48.670952   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:48.670969   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:48.670977   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:48.670982   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:48.670998   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:48.671009   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:48.671020   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:48.671030   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:48.671036   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:48.671044   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:48.671054   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:50.672871   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 8
	I1204 16:09:50.672897   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:50.672944   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:09:50.673972   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases ...
	I1204 16:09:50.674162   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:50.674169   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:50.674177   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:50.674184   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:50.674190   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:50.674196   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:50.674202   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:50.674208   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:50.674220   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:50.674226   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:50.674231   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:50.674237   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:50.674248   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:50.674261   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:50.674268   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:50.674284   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:50.674298   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:50.674311   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:50.674324   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
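	The loop above is the hyperkit driver waiting for the freshly created VM to pick up a DHCP lease: every ~2 seconds it re-reads /var/db/dhcpd_leases, dumps all 18 known entries, and checks whether the VM's MAC address 62:54:2c:74:f2:ca has appeared. It never does, which is why the attempt counter keeps climbing. The sketch below is a minimal, hypothetical reconstruction of that polling logic, not the driver's actual code; DHCPEntry mirrors the struct printed in each "dhcp entry:" line, and parseLeases (the lease-file reader) is left as an assumed callback. Note the MAC normalization: the lease file stores octets both zero-padded and unpadded (compare HWAddress 16:14:a9:0f:3c:1a with ID 16:14:a9:f:3c:1a above), so a naive string comparison would miss matches.
	
	```go
	// Minimal sketch (not the driver's actual implementation) of the
	// lease-polling loop visible in the log above.
	package main
	
	import (
		"fmt"
		"strings"
		"time"
	)
	
	// DHCPEntry mirrors the fields printed in each "dhcp entry:" log line.
	type DHCPEntry struct {
		Name      string
		IPAddress string
		HWAddress string
		ID        string
		Lease     string
	}
	
	// normalizeMAC strips leading zeros from each octet so that
	// "16:14:a9:0f:3c:1a" and "16:14:a9:f:3c:1a" compare equal; both
	// spellings occur in the dump above.
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			if t := strings.TrimLeft(p, "0"); t != "" {
				parts[i] = t
			} else {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}
	
	// waitForIP polls the lease table until the target MAC appears or the
	// attempt budget runs out. parseLeases stands in for whatever reads
	// /var/db/dhcpd_leases into []DHCPEntry.
	func waitForIP(mac string, attempts int, parseLeases func() ([]DHCPEntry, error)) (string, error) {
		want := normalizeMAC(mac)
		for i := 1; i <= attempts; i++ {
			entries, err := parseLeases()
			if err != nil {
				return "", err
			}
			for _, e := range entries {
				if normalizeMAC(e.HWAddress) == want {
					return e.IPAddress, nil
				}
			}
			time.Sleep(2 * time.Second) // matches the ~2s spacing between attempts in the log
		}
		return "", fmt.Errorf("%s never appeared in the lease table after %d attempts", mac, attempts)
	}
	
	func main() {
		// Stub parser returning one entry copied from the dump above.
		stub := func() ([]DHCPEntry, error) {
			return []DHCPEntry{{
				Name: "minikube", IPAddress: "192.169.0.19",
				HWAddress: "1e:8d:c2:3c:32:e4", ID: "1,1e:8d:c2:3c:32:e4", Lease: "0x6750fbac",
			}}, nil
		}
		ip, err := waitForIP("1e:8d:c2:3c:32:e4", 3, stub)
		fmt.Println(ip, err) // 192.169.0.19 <nil>
	}
	```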
	[Attempts 9 through 21 (16:09:52 to 16:10:16, one every ~2 seconds) are identical to attempt 8 above: each re-reads /var/db/dhcpd_leases via hyperkit pid 22527, finds the same 18 entries (192.169.0.2 through 192.169.0.19, unchanged leases), and never finds 62:54:2c:74:f2:ca.]
	[attempts 22 through 29 (16:10:18 to 16:10:32) condensed: each repeats the identical scan at two-second intervals, finding the same 18 entries in /var/db/dhcpd_leases and no lease for 62:54:2c:74:f2:ca]
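
Each "Searching for 62:54:2c:74:f2:ca in /var/db/dhcpd_leases" line above is one scan of the macOS bootpd lease database. Below is a minimal Go sketch of such a scan, assuming the standard bootpd record format (brace-delimited blocks with name=, ip_address=, hw_address= and lease= fields); normalizeMAC and leaseIPForMAC are illustrative names, not the hyperkit driver's actual API. One detail the entries above make visible: bootpd stores MAC octets without leading zeros (HWAddress:92:0d:49:fe:04:ec appears as ID:1,92:d:49:fe:4:ec), so octets should be normalized before comparison.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// normalizeMAC pads every octet to two hex digits; bootpd records MACs
// without leading zeros, so a naive string compare would miss matches.
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		if len(p) == 1 {
			parts[i] = "0" + p
		}
	}
	return strings.Join(parts, ":")
}

// leaseIPForMAC scans the bootpd lease database for one MAC address and
// returns its IP, or "" when no lease matches (the situation in this run).
func leaseIPForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	want := normalizeMAC(mac)
	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// bootpd format: hw_address=1,b6:eb:fa:b5:f1:f1 (type,mac)
			if i := strings.Index(line, ","); i >= 0 {
				hw = line[i+1:]
			}
		case line == "}": // one lease record ends
			if hw != "" && normalizeMAC(hw) == want {
				return ip, nil
			}
			ip, hw = "", ""
		}
	}
	return "", sc.Err()
}

func main() {
	ip, err := leaseIPForMAC("/var/db/dhcpd_leases", "62:54:2c:74:f2:ca")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ip:", ip) // empty when, as in this run, no lease exists
}
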
	I1204 16:10:34.736030   22496 client.go:171] duration metric: took 1m0.833502753s to LocalClient.Create
	I1204 16:10:36.745662   22496 start.go:128] duration metric: took 1m2.877681957s to createHost
	I1204 16:10:36.745691   22496 start.go:83] releasing machines lock for "docker-flags-718000", held for 1m2.877825294s
	W1204 16:10:36.745709   22496 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:54:2c:74:f2:ca
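
The attempt numbers and timestamps above imply a fixed two-second poll, and thirty attempts at that cadence account for the "took 1m0.833502753s to LocalClient.Create" duration before the create fails with "IP address never found in dhcp leases file". A sketch of a wait loop with that shape, reusing leaseIPForMAC from the sketch above (waitForIP is an illustrative name, not the driver's API; imports needed: fmt, log, time):

// waitForIP polls the lease database until the MAC appears or the
// deadline passes, mirroring the Attempt N / two-second cadence above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		log.Printf("Attempt %d", attempt) // the "Attempt N" lines above
		if ip, err := leaseIPForMAC("/var/db/dhcpd_leases", mac); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(2 * time.Second) // matches the spacing between attempts
	}
	return "", fmt.Errorf("IP address never found in dhcp leases file for %s", mac)
}
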
	I1204 16:10:36.746157   22496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:10:36.746196   22496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:10:36.758269   22496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60604
	I1204 16:10:36.758589   22496 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:10:36.759001   22496 main.go:141] libmachine: Using API Version  1
	I1204 16:10:36.759013   22496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:10:36.759280   22496 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:10:36.759712   22496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:10:36.759753   22496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:10:36.771058   22496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60606
	I1204 16:10:36.771416   22496 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:10:36.771747   22496 main.go:141] libmachine: Using API Version  1
	I1204 16:10:36.771761   22496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:10:36.771968   22496 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:10:36.772086   22496 main.go:141] libmachine: (docker-flags-718000) Calling .GetState
	I1204 16:10:36.772188   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:36.772252   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:10:36.773499   22496 main.go:141] libmachine: (docker-flags-718000) Calling .DriverName
	I1204 16:10:36.808880   22496 out.go:177] * Deleting "docker-flags-718000" in hyperkit ...
	I1204 16:10:36.830137   22496 main.go:141] libmachine: (docker-flags-718000) Calling .Remove
	I1204 16:10:36.830258   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:36.830267   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:36.830330   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:10:36.831472   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:36.831552   22496 main.go:141] libmachine: (docker-flags-718000) DBG | waiting for graceful shutdown
	I1204 16:10:37.833678   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:37.833771   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:10:37.834954   22496 main.go:141] libmachine: (docker-flags-718000) DBG | waiting for graceful shutdown
	I1204 16:10:38.835379   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:38.835430   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:10:38.836677   22496 main.go:141] libmachine: (docker-flags-718000) DBG | waiting for graceful shutdown
	I1204 16:10:39.838050   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:39.838130   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:10:39.838814   22496 main.go:141] libmachine: (docker-flags-718000) DBG | waiting for graceful shutdown
	I1204 16:10:40.839405   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:40.839504   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:10:40.840719   22496 main.go:141] libmachine: (docker-flags-718000) DBG | waiting for graceful shutdown
	I1204 16:10:41.841016   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:41.841112   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22527
	I1204 16:10:41.841984   22496 main.go:141] libmachine: (docker-flags-718000) DBG | sending sigkill
	I1204 16:10:41.841992   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:41.854574   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:10:41 WARN : hyperkit: failed to read stdout: EOF
	I1204 16:10:41.854595   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:10:41 WARN : hyperkit: failed to read stderr: EOF
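
The Remove sequence above attempts a graceful stop first: it re-checks the hyperkit pid once per second ("waiting for graceful shutdown") and only after several seconds escalates to "sending sigkill", after which the driver's output readers report EOF. A minimal sketch of that stop pattern (stopHyperkit and the grace period are illustrative, not the driver's exact implementation):

package main

import (
	"os"
	"syscall"
	"time"
)

// stopHyperkit waits up to grace for the process to exit on its own,
// then falls back to SIGKILL, mirroring the shutdown sequence above.
func stopHyperkit(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid) // never fails on Unix
	if err != nil {
		return err
	}
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 probes for existence without delivering a signal.
		if proc.Signal(syscall.Signal(0)) != nil {
			return nil // process already exited gracefully
		}
		time.Sleep(time.Second) // "waiting for graceful shutdown"
	}
	return proc.Kill() // "sending sigkill" (SIGKILL on Unix)
}

func main() {
	// Example: stop pid 22527 (the hyperkit pid in this run), 5s grace.
	_ = stopHyperkit(22527, 5*time.Second)
}
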
	W1204 16:10:41.875027   22496 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:54:2c:74:f2:ca
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:54:2c:74:f2:ca
	I1204 16:10:41.875048   22496 start.go:729] Will try again in 5 seconds ...
	I1204 16:10:46.877097   22496 start.go:360] acquireMachinesLock for docker-flags-718000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 16:11:39.763705   22496 start.go:364] duration metric: took 52.886536753s to acquireMachinesLock for "docker-flags-718000"
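
The 52.9s spent in acquireMachinesLock means another process held the per-host machines lock while this test waited (the tests in this report run in parallel). The Spec printed above ({Name ... Delay:500ms Timeout:13m0s Cancel:<nil>}) appears to match github.com/juju/mutex, which minikube wraps for this lock; a sketch of acquiring it, assuming the juju/mutex/v2 and juju/clock modules:

package main

import (
	"log"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	// Spec mirrors the one logged above; Name is the lock key from this run.
	releaser, err := mutex.Acquire(mutex.Spec{
		Name:    "mk5732d0977303b287a6334fd12d5e58dfaa7fa7",
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond, // retry interval while another holder exists
		Timeout: 13 * time.Minute,       // give up after this long
	})
	if err != nil {
		log.Fatal(err)
	}
	defer releaser.Release()
	// ... critical section: provision and start the machine ...
}
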
	I1204 16:11:39.763730   22496 start.go:93] Provisioning new machine with config: &{Name:docker-flags-718000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 16:11:39.763800   22496 start.go:125] createHost starting for "" (driver="hyperkit")
	I1204 16:11:39.806075   22496 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 16:11:39.806150   22496 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:11:39.806179   22496 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:11:39.817390   22496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60610
	I1204 16:11:39.817734   22496 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:11:39.818114   22496 main.go:141] libmachine: Using API Version  1
	I1204 16:11:39.818128   22496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:11:39.818380   22496 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:11:39.818516   22496 main.go:141] libmachine: (docker-flags-718000) Calling .GetMachineName
	I1204 16:11:39.818625   22496 main.go:141] libmachine: (docker-flags-718000) Calling .DriverName
	I1204 16:11:39.818748   22496 start.go:159] libmachine.API.Create for "docker-flags-718000" (driver="hyperkit")
	I1204 16:11:39.818782   22496 client.go:168] LocalClient.Create starting
	I1204 16:11:39.818812   22496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem
	I1204 16:11:39.818875   22496 main.go:141] libmachine: Decoding PEM data...
	I1204 16:11:39.818888   22496 main.go:141] libmachine: Parsing certificate...
	I1204 16:11:39.818934   22496 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem
	I1204 16:11:39.818980   22496 main.go:141] libmachine: Decoding PEM data...
	I1204 16:11:39.818992   22496 main.go:141] libmachine: Parsing certificate...
	I1204 16:11:39.819004   22496 main.go:141] libmachine: Running pre-create checks...
	I1204 16:11:39.819010   22496 main.go:141] libmachine: (docker-flags-718000) Calling .PreCreateCheck
	I1204 16:11:39.819097   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:39.819132   22496 main.go:141] libmachine: (docker-flags-718000) Calling .GetConfigRaw
	I1204 16:11:39.827324   22496 main.go:141] libmachine: Creating machine...
	I1204 16:11:39.827333   22496 main.go:141] libmachine: (docker-flags-718000) Calling .Create
	I1204 16:11:39.827430   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:39.827593   22496 main.go:141] libmachine: (docker-flags-718000) DBG | I1204 16:11:39.827417   22574 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:11:39.827644   22496 main.go:141] libmachine: (docker-flags-718000) Downloading /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-17258/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 16:11:40.274114   22496 main.go:141] libmachine: (docker-flags-718000) DBG | I1204 16:11:40.274023   22574 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/id_rsa...
	I1204 16:11:40.459442   22496 main.go:141] libmachine: (docker-flags-718000) DBG | I1204 16:11:40.459373   22574 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/docker-flags-718000.rawdisk...
	I1204 16:11:40.459454   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Writing magic tar header
	I1204 16:11:40.459466   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Writing SSH key tar header
	I1204 16:11:40.459759   22496 main.go:141] libmachine: (docker-flags-718000) DBG | I1204 16:11:40.459731   22574 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000 ...
	I1204 16:11:40.880116   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:40.880134   22496 main.go:141] libmachine: (docker-flags-718000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/hyperkit.pid
	I1204 16:11:40.880146   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Using UUID 801b0c04-e3c3-4cd1-a3e5-c5531a27ee12
	I1204 16:11:40.905676   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Generated MAC 32:ea:0b:0d:94:e8
	I1204 16:11:40.905701   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-718000
	I1204 16:11:40.905742   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"801b0c04-e3c3-4cd1-a3e5-c5531a27ee12", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:11:40.905776   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"801b0c04-e3c3-4cd1-a3e5-c5531a27ee12", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:11:40.905842   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "801b0c04-e3c3-4cd1-a3e5-c5531a27ee12", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/docker-flags-718000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-718000"}
	I1204 16:11:40.905891   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 801b0c04-e3c3-4cd1-a3e5-c5531a27ee12 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/docker-flags-718000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-718000"
	I1204 16:11:40.905911   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
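
The DEBUG lines above show the complete hyperkit argv and that the driver pipes the child's stdout/stderr into its own logger, which is where the later "INFO : hyperkit: stderr: ..." lines come from. A minimal sketch of that launch-and-redirect pattern, not the moby/hyperkit implementation itself (pipeToLog is illustrative and the argv is abbreviated):

package main

import (
	"bufio"
	"io"
	"log"
	"os/exec"
	"sync"
)

// pipeToLog copies one output stream of the child into the logger,
// line by line, as the "Redirecting stdout/stderr to logger" step does.
func pipeToLog(stream string, r io.Reader) {
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		log.Printf("INFO : hyperkit: %s: %s", stream, sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Printf("WARN : hyperkit: failed to read %s: %v", stream, err)
	}
}

func main() {
	// Argv abbreviated; the full set is in the DEBUG: hyperkit: Arguments line above.
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", "hyperkit.pid",
		"-c", "2", "-m", "2048M",
	)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	log.Printf("DEBUG: hyperkit: Pid is %d", cmd.Process.Pid)

	// Drain both pipes before Wait, per the os/exec pipe contract.
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); pipeToLog("stdout", stdout) }()
	go func() { defer wg.Done(); pipeToLog("stderr", stderr) }()
	wg.Wait()
	_ = cmd.Wait()
}
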
	I1204 16:11:40.908772   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 DEBUG: hyperkit: Pid is 22588
	I1204 16:11:40.909419   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 0
	I1204 16:11:40.909431   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:40.909517   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:40.910631   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:40.910732   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:40.910742   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:40.910751   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:40.910760   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:40.910768   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:40.910773   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:40.910795   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:40.910816   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:40.910830   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:40.910844   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:40.910867   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:40.910881   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:40.910889   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:40.910897   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:40.910904   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:40.910909   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:40.910917   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:40.910922   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:40.910946   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:40.919227   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 16:11:40.927724   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/docker-flags-718000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 16:11:40.928605   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:11:40.928635   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:11:40.928649   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:11:40.928662   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:11:41.315921   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 16:11:41.315940   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 16:11:41.430601   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:11:41.430624   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:11:41.430656   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:11:41.430670   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:11:41.431447   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 16:11:41.431457   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 16:11:42.912748   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 1
	I1204 16:11:42.912762   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:42.912844   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:42.913855   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:42.913939   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:42.913947   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:42.913955   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:42.913968   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:42.913975   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:42.913983   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:42.913993   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:42.914000   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:42.914007   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:42.914023   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:42.914036   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:42.914055   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:42.914064   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:42.914071   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:42.914080   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:42.914092   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:42.914102   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:42.914109   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:42.914117   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:44.915527   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 2
	I1204 16:11:44.915543   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:44.915597   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:44.916858   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:44.916943   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:44.916952   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:44.916960   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:44.916965   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:44.916971   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:44.916976   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:44.916982   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:44.916987   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:44.916995   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:44.917000   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:44.917031   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:44.917047   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:44.917055   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:44.917060   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:44.917072   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:44.917086   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:44.917093   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:44.917101   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:44.917120   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:46.807196   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 16:11:46.807276   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 16:11:46.807286   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 16:11:46.826669   22496 main.go:141] libmachine: (docker-flags-718000) DBG | 2024/12/04 16:11:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 16:11:46.917976   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 3
	I1204 16:11:46.918002   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:46.918213   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:46.920057   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:46.920245   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:46.920266   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:46.920277   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:46.920288   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:46.920300   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:46.920321   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:46.920334   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:46.920354   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:46.920366   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:46.920383   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:46.920392   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:46.920402   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:46.920412   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:46.920423   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:46.920434   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:46.920442   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:46.920467   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:46.920485   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:46.920508   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:48.921614   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 4
	I1204 16:11:48.921628   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:48.921718   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:48.922713   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:48.922822   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:48.922834   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:48.922841   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:48.922847   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:48.922858   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:48.922865   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:48.922871   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:48.922887   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:48.922898   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:48.922906   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:48.922911   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:48.922919   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:48.922928   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:48.922935   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:48.922942   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:48.922950   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:48.922963   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:48.922975   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:48.922983   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:50.925019   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 5
	I1204 16:11:50.925032   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:50.925072   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:50.926021   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:50.926107   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:50.926118   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:50.926136   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:50.926143   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:50.926149   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:50.926159   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:50.926167   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:50.926172   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:50.926178   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:50.926184   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:50.926192   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:50.926207   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:50.926220   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:50.926228   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:50.926234   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:50.926245   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:50.926255   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:50.926268   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:50.926281   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:52.928349   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 6
	I1204 16:11:52.928364   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:52.928416   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:52.929432   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:52.929512   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:52.929519   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:52.929529   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:52.929536   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:52.929542   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:52.929549   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:52.929564   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:52.929576   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:52.929587   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:52.929606   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:52.929617   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:52.929626   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:52.929633   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:52.929640   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:52.929656   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:52.929676   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:52.929683   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:52.929689   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:52.929697   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:54.931349   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 7
	I1204 16:11:54.931364   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:54.931408   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:54.932446   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:54.932523   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:54.932542   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:54.932553   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:54.932558   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:54.932582   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:54.932598   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:54.932639   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:54.932662   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:54.932688   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:54.932713   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:54.932724   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:54.932734   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:54.932748   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:54.932761   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:54.932769   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:54.932775   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:54.932783   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:54.932790   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:54.932798   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:56.932977   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 8
	I1204 16:11:56.932991   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:56.933049   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:56.934212   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:56.934338   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:56.934368   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:56.934376   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:56.934381   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:56.934397   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:56.934403   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:56.934410   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:56.934416   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:56.934422   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:56.934452   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:56.934473   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:56.934482   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:56.934489   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:56.934496   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:56.934513   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:56.934520   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:56.934526   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:56.934534   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:56.934543   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:58.935988   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 9
	I1204 16:11:58.936003   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:58.936082   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:11:58.937074   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:11:58.937124   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:58.937133   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:58.937141   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:58.937147   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:58.937154   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:58.937162   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:58.937179   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:58.937192   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:58.937202   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:58.937210   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:58.937217   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:58.937223   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:58.937229   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:58.937237   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:58.937243   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:58.937255   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:58.937262   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:58.937269   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:58.937288   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:12:00.939362   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 10
	I1204 16:12:00.939374   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:00.939444   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:00.940466   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:00.940544   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:12:00.940555   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:12:00.940570   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:12:00.940578   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:12:00.940588   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:12:00.940594   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:12:00.940620   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:12:00.940632   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:12:00.940640   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:12:00.940647   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:12:00.940655   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:12:00.940679   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:12:00.940699   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:12:00.940706   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:12:00.940716   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:12:00.940724   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:12:00.940731   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:12:00.940738   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:12:00.940746   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:12:02.942414   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 11
	I1204 16:12:02.942427   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:02.942482   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:02.943602   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:02.943688   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:12:02.943696   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:12:02.943710   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:12:02.943716   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:12:02.943733   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:12:02.943744   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:12:02.943752   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:12:02.943778   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:12:02.943791   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:12:02.943798   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:12:02.943806   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:12:02.943822   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:12:02.943833   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:12:02.943842   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:12:02.943849   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:12:02.943888   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:12:02.943936   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:12:02.943942   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:12:02.943952   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:12:04.944843   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 12
	I1204 16:12:04.944863   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:04.944891   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:04.946490   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:04.946553   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:12:04.946565   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:12:04.946573   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:12:04.946579   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:12:04.946585   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:12:04.946591   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:12:04.946597   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:12:04.946607   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:12:04.946621   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:12:04.946634   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:12:04.946647   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:12:04.946661   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:12:04.946670   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:12:04.946678   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:12:04.946686   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:12:04.946693   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:12:04.946699   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:12:04.946706   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:12:04.946713   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:12:06.948618   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 13
	I1204 16:12:06.948632   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:06.948718   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:06.949708   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:06.949794   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:12:06.949802   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:12:06.949810   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:12:06.949817   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:12:06.949837   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:12:06.949860   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:12:06.949874   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:12:06.949888   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:12:06.949898   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:12:06.949906   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:12:06.949913   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:12:06.949927   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:12:06.949935   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:12:06.949947   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:12:06.949976   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:12:06.950008   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:12:06.950014   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:12:06.950020   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:12:06.950027   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
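Each "Attempt" above is one pass of the same loop: confirm the hyperkit process (pid 22588) is still alive, re-read /var/db/dhcpd_leases, and look for the MAC address 32:ea:0b:0d:94:e8 that was assigned to the new VM. The file keeps returning only the 18 stale minikube entries listed, so the driver sleeps about two seconds and polls again. Below is a minimal standalone Go sketch of that scan, not the driver's actual code: it assumes the usual macOS bootpd lease-block layout (name=, ip_address=, hw_address=1,<mac> lines inside braces), and the zero-stripping in canon is inferred from the ID fields above (the file stores 1,92:d:49:fe:4:ec for the padded HWAddress 92:0d:49:fe:04:ec).

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)

	// canon lower-cases a MAC and strips leading zeros from each octet,
	// since the lease file stores octets unpadded (see the ID fields above).
	func canon(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			p = strings.TrimLeft(p, "0")
			if p == "" {
				p = "0"
			}
			parts[i] = p
		}
		return strings.Join(parts, ":")
	}

	// findIPByMAC scans the lease file once and returns the IP bound to mac,
	// or "" when no entry matches, the case that keeps this log retrying.
	func findIPByMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		want := canon(mac)
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				// Assumes ip_address precedes hw_address in each {...} block.
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address=1,1e:8d:c2:3c:32:e4; the leading "1," is the
				// DHCP hardware type (1 = Ethernet), so drop it first.
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:]
				}
				if canon(hw) == want {
					return ip, nil
				}
			}
		}
		return "", sc.Err()
	}

	func main() {
		const mac = "32:ea:0b:0d:94:e8" // the MAC this log is waiting on
		for attempt := 1; attempt <= 30; attempt++ {
			ip, err := findIPByMAC("/var/db/dhcpd_leases", mac)
			if err != nil {
				fmt.Fprintln(os.Stderr, "read leases:", err)
				os.Exit(1)
			}
			if ip != "" {
				fmt.Printf("attempt %d: %s has lease %s\n", attempt, mac, ip)
				return
			}
			fmt.Printf("attempt %d: %s not in lease file yet\n", attempt, mac)
			time.Sleep(2 * time.Second) // the log shows ~2 s between attempts
		}
		fmt.Println("gave up: the VM never obtained a DHCP lease")
	}

Run against the lease file shown here, every pass would fall through to the "not in lease file yet" branch, which is exactly the pattern the remaining attempts below record.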
	I1204 16:12:08.950012   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 14
	I1204 16:12:08.950027   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:08.950097   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:08.951218   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:08.951317   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
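The Lease values in these entries look like lease-expiry times encoded as hexadecimal Unix timestamps: 0x6750fbac decodes to 1733360556, i.e. 2024-12-05T01:02:36Z, about 17:02 in the host's UTC-8 local time and roughly 50 minutes after these 16:12 log lines, while the oldest entry (0x6750f012 on 192.169.0.2) decodes to about 16:13 local, right around these lines. That reading would explain why the same 18 entries reappear unchanged on every attempt. A quick decode, in the same hedged, illustrative spirit as the sketch above:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		// Lease fields above appear to be hex Unix expiry times; decode two.
		for _, h := range []string{"0x6750fbac", "0x6750f012"} {
			sec, err := strconv.ParseInt(h, 0, 64) // base 0 honors the 0x prefix
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s -> %d -> %s\n", h, sec, time.Unix(sec, 0).UTC().Format(time.RFC3339))
		}
	}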
	I1204 16:12:10.953519   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 15
	I1204 16:12:10.953533   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:10.953597   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:10.954612   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:10.954702   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:12.956109   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 16
	I1204 16:12:12.956129   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:12.956190   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:12.957177   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:12.957223   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:14.959406   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 17
	I1204 16:12:14.959420   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:14.959464   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:14.960518   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:14.960610   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:16.962808   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 18
	I1204 16:12:16.962821   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:16.962876   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:16.964006   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:16.964119   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:18.965171   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 19
	I1204 16:12:18.965185   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:18.965273   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:18.966260   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:18.966335   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:20.967332   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 20
	I1204 16:12:20.967344   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:20.967405   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:20.968418   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:20.968505   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:22.970725   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 21
	I1204 16:12:22.970739   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:22.970796   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:22.971758   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:22.971888   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:24.972664   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 22
	I1204 16:12:24.972678   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:24.972745   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:24.973724   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:24.973822   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:26.975307   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 23
	I1204 16:12:26.975322   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:26.975381   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:26.976360   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:26.976443   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:28.978291   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 24
	I1204 16:12:28.978306   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:28.978372   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:28.979475   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:28.979592   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:30.981266   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 25
	I1204 16:12:30.981281   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:30.981344   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:30.982699   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:30.982775   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:32.984448   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 26
	I1204 16:12:32.984463   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:32.984533   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:32.985510   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:32.985584   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	... (the same 18 dhcp entries as in Attempt 13, unchanged apart from the log timestamp prefix; 32:ea:0b:0d:94:e8 still absent)
	I1204 16:12:34.987876   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 27
	I1204 16:12:34.987892   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:34.987917   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:34.988937   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:34.989028   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:12:34.989058   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:12:34.989067   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:12:34.989072   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:12:34.989078   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:12:34.989085   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:12:34.989102   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:12:34.989114   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:12:34.989126   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:12:34.989132   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:12:34.989142   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:12:34.989148   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:12:34.989154   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:12:34.989161   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:12:34.989168   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:12:34.989175   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:12:34.989181   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:12:34.989186   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:12:34.989194   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:12:36.991286   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 28
	I1204 16:12:36.991304   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:36.991358   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:36.992375   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:36.992463   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:12:36.992471   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:12:36.992478   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:12:36.992487   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:12:36.992497   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:12:36.992506   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:12:36.992513   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:12:36.992519   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:12:36.992543   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:12:36.992556   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:12:36.992564   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:12:36.992572   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:12:36.992579   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:12:36.992585   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:12:36.992599   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:12:36.992613   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:12:36.992622   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:12:36.992630   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:12:36.992638   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:12:38.994661   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Attempt 29
	I1204 16:12:38.994678   22496 main.go:141] libmachine: (docker-flags-718000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:12:38.994726   22496 main.go:141] libmachine: (docker-flags-718000) DBG | hyperkit pid from json: 22588
	I1204 16:12:38.995705   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Searching for 32:ea:0b:0d:94:e8 in /var/db/dhcpd_leases ...
	I1204 16:12:38.995793   22496 main.go:141] libmachine: (docker-flags-718000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:12:38.995802   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:12:38.995810   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:12:38.995815   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:12:38.995822   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:12:38.995828   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:12:38.995834   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:12:38.995840   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:12:38.995847   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:12:38.995855   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:12:38.995870   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:12:38.995882   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:12:38.995901   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:12:38.995909   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:12:38.995922   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:12:38.995930   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:12:38.995936   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:12:38.995942   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:12:38.995950   22496 main.go:141] libmachine: (docker-flags-718000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:12:40.997501   22496 client.go:171] duration metric: took 1m1.178688333s to LocalClient.Create
	I1204 16:12:42.998878   22496 start.go:128] duration metric: took 1m3.2350464s to createHost
	I1204 16:12:42.998894   22496 start.go:83] releasing machines lock for "docker-flags-718000", held for 1m3.235155051s
	W1204 16:12:42.998977   22496 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-718000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:ea:0b:0d:94:e8
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-718000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:ea:0b:0d:94:e8
	I1204 16:12:43.062202   22496 out.go:201] 
	W1204 16:12:43.083215   22496 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:ea:0b:0d:94:e8
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:ea:0b:0d:94:e8
	W1204 16:12:43.083228   22496 out.go:270] * 
	* 
	W1204 16:12:43.083850   22496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 16:12:43.146166   22496 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-718000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
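What the repeated "Attempt N" blocks above show: after launching the VM, the hyperkit driver polls /var/db/dhcpd_leases (here roughly every 2s, giving up after attempt 29) for a lease whose hardware address matches the MAC it generated for the VM (32:ea:0b:0d:94:e8). The lease never appears, so LocalClient.Create gives up after ~1m and start exits with GUEST_PROVISION. Below is a minimal sketch of that lookup, assuming the usual bootpd lease-file layout of {...} blocks containing name=/ip_address=/hw_address= lines; the helper names are illustrative, not minikube's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the macOS bootpd lease file for a hardware address and
// returns the matching IP. It assumes each lease is a {...} block of
// key=value lines (e.g. ip_address=192.169.0.19, hw_address=1,1e:8d:c2:3c:32:e4)
// and that ip_address precedes hw_address within a block, as bootpd writes it.
func findIPForMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,<mac>; real code normalizes octets first.
			if strings.HasSuffix(line, ","+mac) {
				return ip, true
			}
		case line == "}":
			ip = "" // end of one lease block
		}
	}
	return "", false
}

func main() {
	const mac = "32:ea:0b:0d:94:e8" // the MAC this run was waiting on
	for attempt := 0; attempt < 30; attempt++ {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("found %s after attempt %d\n", ip, attempt)
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between attempts
	}
	fmt.Println("IP address never found in dhcp leases file")
}

Note that bootpd strips leading zeros from lease octets (the log prints HWAddress f6:f1:1e:09:c4:d0 but ID f6:f1:1e:9:c4:d0), so a robust matcher normalizes both sides before comparing.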
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-718000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-718000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (203.710052ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-718000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-718000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
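The literal "<no value>" in the suggested commands above ("minikube delete <no value>" / "minikube start <no value>") is the placeholder Go's text/template prints when a referenced field is missing from the template data, which is why the suggestion renders without a profile name. A self-contained reproduction; the field name profileArg is hypothetical, not minikube's actual template key:

package main

import (
	"os"
	"text/template"
)

func main() {
	// A suggestion template that references a field the data does not supply.
	t := template.Must(template.New("suggest").Parse(
		"minikube delete {{.profileArg}}\nminikube start {{.profileArg}}\n"))
	// Executing against an empty map leaves {{.profileArg}} unresolved;
	// text/template prints the literal "<no value>" in its place.
	_ = t.Execute(os.Stdout, map[string]any{})
}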
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-718000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-718000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (185.923825ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-718000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-718000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-718000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
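The two failed expectations above reduce to: run systemctl show docker inside the VM over minikube ssh, then assert that the --docker-env values appear in the unit's Environment= property and that --docker-opt=debug surfaces as --debug in ExecStart. Because the VM never received an IP, both ssh calls exit with status 50 and the captured output is empty ("\n\n"). A condensed sketch of these checks, not the verbatim docker_test.go body:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// showDockerProperty runs "systemctl show docker" inside the minikube VM,
// mirroring the ssh invocations in the log above.
func showDockerProperty(profile, prop string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh",
		"sudo systemctl show docker --property="+prop+" --no-pager").CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "docker-flags-718000"

	// --docker-env values must surface in the unit's Environment= property.
	env, err := showDockerProperty(profile, "Environment")
	if err != nil {
		fmt.Println("ssh failed:", err) // here: exit status 50, no endpoint
		return
	}
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(env, kv) {
			fmt.Printf("expected %q in docker Environment, got %q\n", kv, env)
		}
	}

	// --docker-opt=debug must surface as --debug in ExecStart.
	execStart, _ := showDockerProperty(profile, "ExecStart")
	if !strings.Contains(execStart, "--debug") {
		fmt.Printf("expected --debug in ExecStart, got %q\n", execStart)
	}
}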
panic.go:629: *** TestDockerFlags FAILED at 2024-12-04 16:12:43.649797 -0800 PST m=+3614.386278467
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-718000 -n docker-flags-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-718000 -n docker-flags-718000: exit status 7 (99.667241ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 16:12:43.746890   22629 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1204 16:12:43.746917   22629 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-718000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-718000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-718000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-718000: (5.263302345s)
--- FAIL: TestDockerFlags (252.36s)

                                                
                                    
TestForceSystemdFlag (252.2s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-492000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-492000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.513409324s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-492000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-492000" primary control-plane node in "force-systemd-flag-492000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-492000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 16:07:33.553769   22443 out.go:345] Setting OutFile to fd 1 ...
	I1204 16:07:33.553976   22443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 16:07:33.553982   22443 out.go:358] Setting ErrFile to fd 2...
	I1204 16:07:33.553986   22443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 16:07:33.554192   22443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 16:07:33.555779   22443 out.go:352] Setting JSON to false
	I1204 16:07:33.584132   22443 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7623,"bootTime":1733349630,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 16:07:33.584293   22443 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 16:07:33.607255   22443 out.go:177] * [force-systemd-flag-492000] minikube v1.34.0 on Darwin 15.0.1
	I1204 16:07:33.648119   22443 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 16:07:33.648168   22443 notify.go:220] Checking for updates...
	I1204 16:07:33.695780   22443 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 16:07:33.716609   22443 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 16:07:33.737878   22443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 16:07:33.758821   22443 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:07:33.779631   22443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 16:07:33.801186   22443 config.go:182] Loaded profile config "force-systemd-env-608000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 16:07:33.801285   22443 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 16:07:33.832818   22443 out.go:177] * Using the hyperkit driver based on user configuration
	I1204 16:07:33.874676   22443 start.go:297] selected driver: hyperkit
	I1204 16:07:33.874693   22443 start.go:901] validating driver "hyperkit" against <nil>
	I1204 16:07:33.874705   22443 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 16:07:33.880193   22443 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 16:07:33.880336   22443 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 16:07:33.891426   22443 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 16:07:33.898221   22443 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:07:33.898247   22443 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 16:07:33.898280   22443 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 16:07:33.898506   22443 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 16:07:33.898534   22443 cni.go:84] Creating CNI manager for ""
	I1204 16:07:33.898576   22443 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 16:07:33.898583   22443 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 16:07:33.898647   22443 start.go:340] cluster config:
	{Name:force-systemd-flag-492000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 16:07:33.898734   22443 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 16:07:33.919667   22443 out.go:177] * Starting "force-systemd-flag-492000" primary control-plane node in "force-systemd-flag-492000" cluster
	I1204 16:07:33.960672   22443 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 16:07:33.960721   22443 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1204 16:07:33.960737   22443 cache.go:56] Caching tarball of preloaded images
	I1204 16:07:33.960876   22443 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 16:07:33.960886   22443 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 16:07:33.960968   22443 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/force-systemd-flag-492000/config.json ...
	I1204 16:07:33.960988   22443 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/force-systemd-flag-492000/config.json: {Name:mkddefb101de60733627092cc3a22db27f03f403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 16:07:33.961353   22443 start.go:360] acquireMachinesLock for force-systemd-flag-492000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 16:08:30.827087   22443 start.go:364] duration metric: took 56.890624798s to acquireMachinesLock for "force-systemd-flag-492000"
	I1204 16:08:30.827129   22443 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 16:08:30.827199   22443 start.go:125] createHost starting for "" (driver="hyperkit")
	I1204 16:08:30.869370   22443 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 16:08:30.869592   22443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:08:30.869627   22443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:08:30.881069   22443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60582
	I1204 16:08:30.881499   22443 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:08:30.881947   22443 main.go:141] libmachine: Using API Version  1
	I1204 16:08:30.881959   22443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:08:30.882263   22443 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:08:30.882459   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .GetMachineName
	I1204 16:08:30.882618   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .DriverName
	I1204 16:08:30.882728   22443 start.go:159] libmachine.API.Create for "force-systemd-flag-492000" (driver="hyperkit")
	I1204 16:08:30.882756   22443 client.go:168] LocalClient.Create starting
	I1204 16:08:30.882788   22443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem
	I1204 16:08:30.882848   22443 main.go:141] libmachine: Decoding PEM data...
	I1204 16:08:30.882864   22443 main.go:141] libmachine: Parsing certificate...
	I1204 16:08:30.882917   22443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem
	I1204 16:08:30.882963   22443 main.go:141] libmachine: Decoding PEM data...
	I1204 16:08:30.882974   22443 main.go:141] libmachine: Parsing certificate...
	I1204 16:08:30.882992   22443 main.go:141] libmachine: Running pre-create checks...
	I1204 16:08:30.883000   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .PreCreateCheck
	I1204 16:08:30.883086   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:30.883253   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .GetConfigRaw
	I1204 16:08:30.890698   22443 main.go:141] libmachine: Creating machine...
	I1204 16:08:30.890729   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .Create
	I1204 16:08:30.890823   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:30.891024   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | I1204 16:08:30.890819   22476 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:08:30.891081   22443 main.go:141] libmachine: (force-systemd-flag-492000) Downloading /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-17258/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 16:08:31.339097   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | I1204 16:08:31.339035   22476 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/id_rsa...
	I1204 16:08:31.395162   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | I1204 16:08:31.395079   22476 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/force-systemd-flag-492000.rawdisk...
	I1204 16:08:31.395174   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Writing magic tar header
	I1204 16:08:31.395185   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Writing SSH key tar header
	I1204 16:08:31.395826   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | I1204 16:08:31.395717   22476 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000 ...
	I1204 16:08:31.778478   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:31.778495   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/hyperkit.pid
	I1204 16:08:31.778507   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Using UUID 50cdf1a7-9c8e-4338-b566-05e05cbd4ad5
	I1204 16:08:31.806617   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Generated MAC 12:4a:f2:91:2c:48
	I1204 16:08:31.806666   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-492000
	I1204 16:08:31.806748   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"50cdf1a7-9c8e-4338-b566-05e05cbd4ad5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:08:31.806786   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"50cdf1a7-9c8e-4338-b566-05e05cbd4ad5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:08:31.806846   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "50cdf1a7-9c8e-4338-b566-05e05cbd4ad5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/force-systemd-flag-492000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-492000"}
	I1204 16:08:31.806884   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 50cdf1a7-9c8e-4338-b566-05e05cbd4ad5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/force-systemd-flag-492000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-492000"
	I1204 16:08:31.806897   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 16:08:31.809945   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 DEBUG: hyperkit: Pid is 22493
	I1204 16:08:31.811483   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 0
	I1204 16:08:31.811504   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:31.811569   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:31.812737   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:31.812829   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:31.812846   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:31.812872   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:31.812892   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:31.812901   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:31.812908   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:31.812919   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:31.812947   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:31.812972   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:31.812980   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:31.812985   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:31.812993   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:31.812998   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:31.813069   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:31.813098   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:31.813112   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:31.813122   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:31.813128   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:31.813139   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:31.820256   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 16:08:31.828564   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 16:08:31.829681   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:08:31.829713   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:08:31.829727   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:08:31.829744   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:08:32.212692   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 16:08:32.212709   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 16:08:32.327202   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:08:32.327230   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:08:32.327242   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:08:32.327256   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:08:32.328090   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 16:08:32.328101   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 16:08:33.811039   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 1
	I1204 16:08:33.811056   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:33.811176   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:33.812202   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:33.812296   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:33.812307   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:33.812313   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:33.812319   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:33.812326   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:33.812331   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:33.812337   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:33.812343   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:33.812351   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:33.812361   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:33.812376   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:33.812395   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:33.812403   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:33.812411   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:33.812429   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:33.812442   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:33.812450   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:33.812456   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:33.812472   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:35.811905   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 2
	I1204 16:08:35.811921   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:35.811994   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:35.813024   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:35.813124   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:35.813138   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:35.813147   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:35.813156   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:35.813169   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:35.813181   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:35.813192   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:35.813200   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:35.813213   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:35.813224   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:35.813246   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:35.813255   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:35.813262   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:35.813268   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:35.813274   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:35.813282   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:35.813300   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:35.813310   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:35.813319   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
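
	What these attempts trace is a fixed polling loop: every ~2 s the driver confirms the hyperkit process (pid 22493) is still alive, re-reads /var/db/dhcpd_leases, and scans the entries for the VM's MAC 12:4a:f2:91:2c:48. A minimal sketch of that loop follows (an illustration, not the actual docker-machine-driver-hyperkit source; the file path, pid, and target MAC come from the log above, while the bootpd record layout `hw_address=1,<mac>`, the regexp, and the 60-attempt budget are assumptions). Note the octet padding: the leases file stores "92:d:49:fe:4:ec" for hardware address 92:0d:49:fe:04:ec, visible in the ID field of the entries above, so a naive string compare would miss matches.

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
		"syscall"
		"time"
	)

	// Assumed bootpd record layout: each lease block contains a line
	// like `hw_address=1,1e:8d:c2:3c:32:e4`.
	var hwRe = regexp.MustCompile(`hw_address=1,([0-9a-fA-F:]+)`)

	// normalizeMAC zero-pads each octet so "92:d:49:fe:4:ec" compares
	// equal to "92:0d:49:fe:04:ec".
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			if len(p) == 1 {
				parts[i] = "0" + p
			}
		}
		return strings.Join(parts, ":")
	}

	// leaseHasMAC re-reads the leases file and reports whether target
	// appears in any entry.
	func leaseHasMAC(path, target string) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		want := normalizeMAC(target)
		for _, m := range hwRe.FindAllStringSubmatch(string(data), -1) {
			if normalizeMAC(m[1]) == want {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		const (
			leases = "/var/db/dhcpd_leases"
			target = "12:4a:f2:91:2c:48" // MAC the driver is waiting on
			pid    = 22493               // hyperkit pid from the log
		)
		for attempt := 1; attempt <= 60; attempt++ {
			// Signal 0 only checks that the process still exists.
			if err := syscall.Kill(pid, 0); err != nil {
				fmt.Fprintln(os.Stderr, "hyperkit process gone:", err)
				os.Exit(1)
			}
			if found, err := leaseHasMAC(leases, target); err == nil && found {
				fmt.Println("lease acquired for", target)
				return
			}
			fmt.Printf("Attempt %d: %s not in %s yet\n", attempt, target, leases)
			time.Sleep(2 * time.Second)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for DHCP lease")
		os.Exit(1)
	}
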
	I1204 16:08:37.673259   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 16:08:37.673423   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 16:08:37.673435   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 16:08:37.693069   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:08:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 16:08:37.811661   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 3
	I1204 16:08:37.811673   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:37.811824   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:37.812751   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:37.812890   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:37.812900   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:37.812908   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:37.812913   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:37.812919   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:37.812924   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:37.812930   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:37.812935   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:37.812941   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:37.812948   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:37.812970   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:37.812982   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:37.812998   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:37.813006   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:37.813013   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:37.813019   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:37.813026   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:37.813033   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:37.813041   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:39.811604   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 4
	I1204 16:08:39.811621   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:39.811707   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:39.812725   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:39.812846   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:39.812858   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:39.812871   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:39.812882   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:39.812889   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:39.812894   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:39.812900   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:39.812906   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:39.812919   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:39.812928   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:39.812944   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:39.812962   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:39.812978   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:39.812989   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:39.812998   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:39.813006   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:39.813014   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:39.813028   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:39.813036   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:41.813121   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 5
	I1204 16:08:41.813136   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:41.813210   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:41.814173   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:41.814278   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:41.814288   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:41.814295   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:41.814300   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:41.814325   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:41.814336   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:41.814355   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:41.814369   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:41.814383   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:41.814408   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:41.814417   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:41.814424   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:41.814432   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:41.814443   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:41.814453   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:41.814463   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:41.814492   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:41.814504   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:41.814512   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:43.813753   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 6
	I1204 16:08:43.813767   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:43.813855   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:43.814893   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:43.815005   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:43.815028   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:43.815061   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:43.815066   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:43.815084   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:43.815090   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:43.815103   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:43.815112   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:43.815122   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:43.815130   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:43.815148   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:43.815160   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:43.815168   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:43.815176   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:43.815183   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:43.815190   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:43.815207   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:43.815218   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:43.815236   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:45.816215   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 7
	I1204 16:08:45.816230   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:45.816287   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:45.817353   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:45.817431   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:45.817451   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:45.817461   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:45.817470   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:45.817477   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:45.817485   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:45.817492   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:45.817497   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:45.817503   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:45.817509   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:45.817515   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:45.817521   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:45.817527   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:45.817541   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:45.817549   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:45.817556   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:45.817562   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:45.817570   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:45.817578   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:47.817161   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 8
	I1204 16:08:47.817186   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:47.817204   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:47.818201   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:47.818269   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:47.818279   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:47.818289   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:47.818295   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:47.818317   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:47.818330   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:47.818345   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:47.818353   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:47.818365   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:47.818377   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:47.818385   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:47.818393   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:47.818410   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:47.818420   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:47.818427   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:47.818434   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:47.818441   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:47.818446   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:47.818459   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:49.818939   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 9
	I1204 16:08:49.818951   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:49.818999   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:49.820064   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:49.820148   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:49.820158   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:49.820167   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:49.820176   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:49.820183   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:49.820188   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:49.820209   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:49.820222   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:49.820229   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:49.820236   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:49.820242   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:49.820248   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:49.820265   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:49.820277   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:49.820285   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:49.820292   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:49.820305   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:49.820316   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:49.820326   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:51.821645   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 10
	I1204 16:08:51.821678   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:51.821713   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:51.822788   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:51.822846   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:51.822865   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:51.822885   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:51.822903   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:51.822922   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:51.822933   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:51.822940   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:51.822945   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:51.822953   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:51.822979   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:51.822985   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:51.822992   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:51.823000   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:51.823008   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:51.823016   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:51.823030   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:51.823044   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:51.823052   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:51.823059   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:53.824436   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 11
	I1204 16:08:53.824449   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:53.824536   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:53.825532   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:53.825624   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:53.825635   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:53.825644   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:53.825649   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:53.825655   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:53.825661   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:53.825667   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:53.825675   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:53.825682   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:53.825687   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:53.825693   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:53.825699   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:53.825720   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:53.825732   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:53.825750   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:53.825759   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:53.825766   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:53.825772   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:53.825790   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:55.826486   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 12
	I1204 16:08:55.826502   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:55.826617   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:55.827644   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:55.827737   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:55.827747   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:55.827755   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:55.827760   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:55.827778   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:55.827789   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:55.827796   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:55.827802   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:55.827818   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:55.827827   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:55.827835   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:55.827843   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:55.827849   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:55.827855   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:55.827869   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:55.827882   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:55.827890   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:55.827899   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:55.827907   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:57.828236   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 13
	I1204 16:08:57.828252   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:57.828286   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:57.829311   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:57.829395   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:57.829408   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:57.829422   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:57.829430   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:57.829436   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:57.829441   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:57.829457   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:57.829470   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:57.829491   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:57.829504   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:57.829513   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:57.829518   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:57.829530   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:57.829537   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:57.829547   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:57.829559   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:57.829566   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:57.829572   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:57.829592   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:59.831206   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 14
	I1204 16:08:59.831223   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:59.831264   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:08:59.832303   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:08:59.832378   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:59.832388   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:59.832406   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:59.832414   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:59.832420   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:59.832428   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:59.832435   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:59.832441   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:59.832447   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:59.832453   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:59.832472   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:59.832483   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:59.832491   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:59.832499   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:59.832511   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:59.832522   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:59.832537   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:59.832544   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:59.832556   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
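The block above (and each "Attempt N" block that follows) shows the hyperkit driver's lease-polling loop: roughly every two seconds it re-reads /var/db/dhcpd_leases looking for the new VM's MAC address (12:4a:f2:91:2c:48), and every attempt finds the same 18 pre-existing entries and no match. Below is a minimal, self-contained Go sketch of that kind of poll loop, assuming the key=value layout of the lease file implied by the entries printed above; it is illustrative only, not minikube's actual implementation, and findIPByMAC, normalizeMAC, and the field names are assumptions.

// Hypothetical sketch (requires Go 1.20+ for strings.CutPrefix):
// poll the macOS DHCP lease file for a VM's MAC address, the way
// the log above retries every ~2 seconds.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// normalizeMAC strips leading zeros from each octet; the lease file
// stores addresses that way (the log records 92:0d:49:fe:04:ec as
// ID:1,92:d:49:fe:4:ec), so both sides must be normalized to match.
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// findIPByMAC scans the lease file once and returns the IP bound to
// hwAddr, or "" if no entry matches. Field names (ip_address=,
// hw_address=) are assumed from the entries shown in the log.
func findIPByMAC(leaseFile, hwAddr string) (string, error) {
	want := normalizeMAC(hwAddr)
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v
		} else if v, ok := strings.CutPrefix(line, "hw_address="); ok {
			// Value looks like "1,1e:8d:c2:3c:32:e4"; drop the
			// leading hardware-type field before comparing.
			if i := strings.IndexByte(v, ','); i >= 0 {
				v = v[i+1:]
			}
			if normalizeMAC(v) == want {
				return ip, nil
			}
		}
	}
	return "", scanner.Err()
}

func main() {
	const mac = "12:4a:f2:91:2c:48" // the MAC the log above is waiting for
	for attempt := 1; attempt <= 60; attempt++ {
		ip, err := findIPByMAC("/var/db/dhcpd_leases", mac)
		if err == nil && ip != "" {
			fmt.Printf("found %s -> %s on attempt %d\n", mac, ip, attempt)
			return
		}
		fmt.Printf("attempt %d: %s not leased yet\n", attempt, mac)
		time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
	}
	fmt.Println("gave up waiting for DHCP lease")
}

A bounded poll like this mirrors the driver's behavior in the log; in this failing run the MAC never appears in any attempt, which is consistent with the VM never completing DHCP rather than with a parsing problem.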
	I1204 16:09:01.834248   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 15
	I1204 16:09:01.834261   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:01.834313   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:01.835354   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:01.835493   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:01.835503   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:01.835510   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:01.835517   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:01.835523   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:01.835529   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:01.835559   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:01.835571   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:01.835586   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:01.835598   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:01.835606   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:01.835612   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:01.835619   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:01.835626   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:01.835633   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:01.835647   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:01.835655   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:01.835669   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:01.835678   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:03.836035   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 16
	I1204 16:09:03.836046   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:03.836166   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:03.837155   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:03.837316   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:03.837327   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:03.837334   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:03.837343   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:03.837353   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:03.837359   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:03.837366   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:03.837371   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:03.837378   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:03.837384   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:03.837389   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:03.837397   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:03.837405   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:03.837413   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:03.837419   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:03.837427   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:03.837434   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:03.837441   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:03.837458   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:05.837678   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 17
	I1204 16:09:05.837690   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:05.837767   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:05.838880   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:05.838912   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:05.838919   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:05.838929   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:05.838937   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:05.838943   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:05.838954   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:05.838967   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:05.838976   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:05.838987   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:05.839001   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:05.839008   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:05.839016   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:05.839022   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:05.839028   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:05.839042   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:05.839049   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:05.839056   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:05.839069   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:05.839083   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:07.840944   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 18
	I1204 16:09:07.840957   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:07.840984   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:07.842076   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:07.842150   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:07.842158   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:07.842168   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:07.842174   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:07.842209   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:07.842222   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:07.842229   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:07.842240   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:07.842248   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:07.842256   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:07.842263   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:07.842268   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:07.842276   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:07.842283   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:07.842297   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:07.842304   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:07.842321   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:07.842333   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:07.842343   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:09.842787   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 19
	I1204 16:09:09.842800   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:09.842880   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:09.843950   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:09.843985   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:09.843994   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:09.844017   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:09.844026   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:09.844034   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:09.844050   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:09.844064   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:09.844076   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:09.844089   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:09.844097   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:09.844104   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:09.844110   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:09.844121   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:09.844134   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:09.844143   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:09.844151   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:09.844161   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:09.844175   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:09.844191   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:11.845031   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 20
	I1204 16:09:11.845045   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:11.845107   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:11.846140   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:11.846269   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:11.846281   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:11.846290   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:11.846296   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:11.846303   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:11.846311   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:11.846317   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:11.846324   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:11.846334   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:11.846342   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:11.846349   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:11.846356   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:11.846375   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:11.846386   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:11.846395   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:11.846403   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:11.846410   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:11.846418   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:11.846426   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:13.847539   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 21
	I1204 16:09:13.847553   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:13.847618   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:13.848624   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:13.848700   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:13.848710   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:13.848728   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:13.848734   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:13.848742   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:13.848747   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:13.848754   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:13.848759   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:13.848766   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:13.848775   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:13.848781   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:13.848790   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:13.848805   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:13.848817   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:13.848825   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:13.848831   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:13.848843   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:13.848855   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:13.848871   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:15.849898   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 22
	I1204 16:09:15.849910   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:15.849979   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:15.851072   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:15.851170   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:15.851204   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:15.851214   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:15.851222   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:15.851235   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:15.851252   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:15.851270   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:15.851280   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:15.851288   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:15.851295   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:15.851302   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:15.851309   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:15.851317   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:15.851325   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:15.851332   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:15.851340   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:15.851352   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:15.851361   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:15.851378   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:17.853234   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 23
	I1204 16:09:17.853250   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:17.853373   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:17.854348   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:17.854421   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:17.854428   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:17.854438   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:17.854443   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:17.854477   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:17.854491   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:17.854506   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:17.854514   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:17.854538   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:17.854550   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:17.854558   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:17.854564   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:17.854577   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:17.854589   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:17.854596   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:17.854604   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:17.854618   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:17.854631   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:17.854640   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:19.855516   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 24
	I1204 16:09:19.855531   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:19.855570   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:19.856555   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:19.856658   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:19.856671   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:19.856679   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:19.856686   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:19.856706   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:19.856723   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:19.856736   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:19.856749   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:19.856759   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:19.856767   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:19.856780   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:19.856800   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:19.856821   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:19.856833   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:19.856840   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:19.856848   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:19.856854   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:19.856860   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:19.856866   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:21.856832   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 25
	I1204 16:09:21.856848   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:21.856900   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:21.858040   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:21.858118   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:21.858130   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:21.858138   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:21.858144   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:21.858165   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:21.858180   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:21.858188   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:21.858195   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:21.858202   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:21.858209   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:21.858218   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:21.858225   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:21.858232   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:21.858239   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:21.858246   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:21.858254   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:21.858260   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:21.858265   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:21.858273   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:23.858334   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 26
	I1204 16:09:23.858350   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:23.858459   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:23.859451   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:23.859568   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:23.859576   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:23.859584   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:23.859590   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:23.859617   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:23.859632   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:23.859664   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:23.859674   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:23.859693   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:23.859711   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:23.859722   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:23.859744   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:23.859753   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:23.859760   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:23.859767   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:23.859773   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:23.859780   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:23.859787   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:23.859794   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:25.860409   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 27
	I1204 16:09:25.860425   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:25.860487   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:25.861523   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:25.861643   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:25.861655   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:25.861691   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:25.861703   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:25.861711   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:25.861718   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:25.861737   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:25.861745   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:25.861752   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:25.861759   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:25.861768   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:25.861776   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:25.861783   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:25.861790   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:25.861807   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:25.861818   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:25.861827   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:25.861833   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:25.861848   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:27.862004   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 28
	I1204 16:09:27.862017   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:27.862075   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:27.863055   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:27.863191   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:27.863203   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:27.863213   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:27.863222   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:27.863244   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:27.863257   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:27.863265   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:27.863273   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:27.863287   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:27.863299   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:27.863316   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:27.863328   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:27.863336   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:27.863343   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:27.863350   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:27.863373   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:27.863390   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:27.863399   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:27.863408   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:29.864591   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 29
	I1204 16:09:29.864606   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:29.864675   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:29.865676   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for 12:4a:f2:91:2c:48 in /var/db/dhcpd_leases ...
	I1204 16:09:29.865752   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:09:29.865761   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:09:29.865769   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:09:29.865774   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:09:29.865780   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:09:29.865789   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:09:29.865797   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:09:29.865802   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:09:29.865817   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:09:29.865826   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:09:29.865834   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:09:29.865847   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:09:29.865863   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:09:29.865876   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:09:29.865883   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:09:29.865892   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:09:29.865899   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:09:29.865905   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:09:29.865914   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:09:31.865967   22443 client.go:171] duration metric: took 1m1.002624975s to LocalClient.Create
	I1204 16:09:33.868054   22443 start.go:128] duration metric: took 1m3.060377691s to createHost
	I1204 16:09:33.868090   22443 start.go:83] releasing machines lock for "force-systemd-flag-492000", held for 1m3.060530114s
	W1204 16:09:33.868127   22443 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:4a:f2:91:2c:48
	I1204 16:09:33.868474   22443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:09:33.868561   22443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:09:33.880247   22443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60598
	I1204 16:09:33.880742   22443 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:09:33.881222   22443 main.go:141] libmachine: Using API Version  1
	I1204 16:09:33.881255   22443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:09:33.881578   22443 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:09:33.881924   22443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:09:33.881960   22443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:09:33.893097   22443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60600
	I1204 16:09:33.893437   22443 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:09:33.893784   22443 main.go:141] libmachine: Using API Version  1
	I1204 16:09:33.893795   22443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:09:33.894048   22443 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:09:33.894187   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .GetState
	I1204 16:09:33.894293   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:33.894362   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:33.895574   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .DriverName
	I1204 16:09:33.931325   22443 out.go:177] * Deleting "force-systemd-flag-492000" in hyperkit ...
	I1204 16:09:33.973359   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .Remove
	I1204 16:09:33.973500   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:33.973510   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:33.973585   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:33.974779   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:33.974829   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | waiting for graceful shutdown
	I1204 16:09:34.975024   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:34.975165   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:34.976367   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | waiting for graceful shutdown
	I1204 16:09:35.976482   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:35.976561   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:35.977766   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | waiting for graceful shutdown
	I1204 16:09:36.977973   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:36.978044   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:36.978766   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | waiting for graceful shutdown
	I1204 16:09:37.979746   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:37.979909   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:37.981222   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | waiting for graceful shutdown
	I1204 16:09:38.982026   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:09:38.982096   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22493
	I1204 16:09:38.982827   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | sending sigkill
	I1204 16:09:38.982836   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W1204 16:09:38.994505   22443 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:4a:f2:91:2c:48
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:4a:f2:91:2c:48
	I1204 16:09:38.994527   22443 start.go:729] Will try again in 5 seconds ...
	I1204 16:09:39.004726   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:09:39 WARN : hyperkit: failed to read stderr: EOF
	I1204 16:09:39.004743   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:09:39 WARN : hyperkit: failed to read stdout: EOF
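
The failing attempt above is a bounded polling loop: after launching hyperkit, the driver re-reads macOS's /var/db/dhcpd_leases every couple of seconds (the "Attempt 0" through "Attempt 29" lines), looking for a lease whose hardware address matches the MAC generated for the VM, and LocalClient.Create gives up after about a minute with "could not find an IP address for <mac>". A minimal Go sketch of that pattern follows; it is an illustration under assumptions, not the driver's actual source, and findIPForMAC, waitForIP, and the simplified leases parsing are hypothetical.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the dhcpd leases file once, tracking the most recent
// ip_address= line and returning it when the following hw_address= line
// matches. Note: the raw file can store octets unpadded (see the
// ID:1,92:d:49:fe:4:ec entries in the log), so a real implementation must
// normalize both sides before comparing.
func findIPForMAC(leasePath, mac string) (string, bool) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// entries look like "hw_address=1,aa:05:b9:1b:8c:a2"
			if strings.HasSuffix(line, mac) {
				return ip, true
			}
		}
	}
	return "", false
}

// waitForIP polls the leases file on a fixed interval, mirroring the
// numbered attempts in the log, and fails with the same wording once the
// attempt budget is spent.
func waitForIP(mac string, maxAttempts int, interval time.Duration) (string, error) {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			return ip, nil
		}
		time.Sleep(interval)
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	ip, err := waitForIP("12:4a:f2:91:2c:48", 30, 2*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

Thirty attempts spaced two seconds apart lines up with the "took 1m1.002624975s to LocalClient.Create" metric logged above; on failure the caller releases the machines lock, deletes the half-created VM, and retries the whole createHost sequence once more.
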
	I1204 16:09:43.996122   22443 start.go:360] acquireMachinesLock for force-systemd-flag-492000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 16:10:36.745774   22443 start.go:364] duration metric: took 52.749776692s to acquireMachinesLock for "force-systemd-flag-492000"
	I1204 16:10:36.745797   22443 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 16:10:36.745875   22443 start.go:125] createHost starting for "" (driver="hyperkit")
	I1204 16:10:36.766895   22443 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 16:10:36.766974   22443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:10:36.766996   22443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:10:36.778188   22443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60608
	I1204 16:10:36.778548   22443 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:10:36.778893   22443 main.go:141] libmachine: Using API Version  1
	I1204 16:10:36.778923   22443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:10:36.779183   22443 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:10:36.779292   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .GetMachineName
	I1204 16:10:36.779387   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .DriverName
	I1204 16:10:36.779498   22443 start.go:159] libmachine.API.Create for "force-systemd-flag-492000" (driver="hyperkit")
	I1204 16:10:36.779510   22443 client.go:168] LocalClient.Create starting
	I1204 16:10:36.779541   22443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem
	I1204 16:10:36.779609   22443 main.go:141] libmachine: Decoding PEM data...
	I1204 16:10:36.779624   22443 main.go:141] libmachine: Parsing certificate...
	I1204 16:10:36.779666   22443 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem
	I1204 16:10:36.779711   22443 main.go:141] libmachine: Decoding PEM data...
	I1204 16:10:36.779719   22443 main.go:141] libmachine: Parsing certificate...
	I1204 16:10:36.779731   22443 main.go:141] libmachine: Running pre-create checks...
	I1204 16:10:36.779737   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .PreCreateCheck
	I1204 16:10:36.779819   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:36.779855   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .GetConfigRaw
	I1204 16:10:36.808889   22443 main.go:141] libmachine: Creating machine...
	I1204 16:10:36.808908   22443 main.go:141] libmachine: (force-systemd-flag-492000) Calling .Create
	I1204 16:10:36.808986   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:36.809173   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | I1204 16:10:36.808986   22555 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:10:36.809272   22443 main.go:141] libmachine: (force-systemd-flag-492000) Downloading /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-17258/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 16:10:37.038061   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | I1204 16:10:37.037963   22555 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/id_rsa...
	I1204 16:10:37.268580   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | I1204 16:10:37.268492   22555 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/force-systemd-flag-492000.rawdisk...
	I1204 16:10:37.268594   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Writing magic tar header
	I1204 16:10:37.268609   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Writing SSH key tar header
	I1204 16:10:37.269170   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | I1204 16:10:37.269129   22555 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000 ...
	I1204 16:10:37.651492   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:37.651514   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/hyperkit.pid
	I1204 16:10:37.651528   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Using UUID 3b99fe22-1715-40c7-b7ba-cc9921148bc2
	I1204 16:10:37.676897   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Generated MAC aa:05:b9:1b:8c:a2
	I1204 16:10:37.676915   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-492000
	I1204 16:10:37.676944   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b99fe22-1715-40c7-b7ba-cc9921148bc2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:10:37.676975   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b99fe22-1715-40c7-b7ba-cc9921148bc2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:10:37.677028   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3b99fe22-1715-40c7-b7ba-cc9921148bc2", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/force-systemd-flag-492000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-492000"}
	I1204 16:10:37.677068   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3b99fe22-1715-40c7-b7ba-cc9921148bc2 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/force-systemd-flag-492000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-492000"
	I1204 16:10:37.677112   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 16:10:37.680065   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 DEBUG: hyperkit: Pid is 22556
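
Everything from the Start/check dumps through "Pid is 22556" amounts to forking the hyperkit binary with the argument vector shown in the CmdLine line. The Go sketch below reproduces that invocation with plain os/exec; it is a hypothetical stand-in (with the kexec boot line abridged) for the moby/hyperkit package call the driver actually makes.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// state dir copied from the log; every artifact hyperkit needs lives here
	state := "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000"
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", state+"/hyperkit.pid", // pid file the driver reads back on each attempt
		"-c", "2", "-m", "2048M", // CPUs/memory from the machine config
		"-s", "0:0,hostbridge", "-s", "31,lpc",
		"-s", "1:0,virtio-net", // vmnet NIC; the "Generated MAC" above is derived from -U
		"-U", "3b99fe22-1715-40c7-b7ba-cc9921148bc2",
		"-s", "2:0,virtio-blk,"+state+"/force-systemd-flag-492000.rawdisk",
		"-s", "3,ahci-cd,"+state+"/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty="+state+"/tty,log="+state+"/console-ring",
		// kexec boot line abridged; the full cmdline is in the log above
		"-f", "kexec,"+state+"/bzimage,"+state+"/initrd,earlyprintk=serial loglevel=3 console=ttyS0",
	)
	if err := cmd.Start(); err != nil { // background the VM, as the driver does
		log.Fatal(err)
	}
	log.Printf("hyperkit pid %d", cmd.Process.Pid)
}
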
	I1204 16:10:37.680540   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 0
	I1204 16:10:37.680552   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:37.680567   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:37.681685   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:37.681795   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:37.681818   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:37.681864   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:37.681881   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:37.681894   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:37.681914   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:37.681925   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:37.681936   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:37.681971   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:37.681992   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:37.682011   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:37.682023   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:37.682045   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:37.682068   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:37.682089   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:37.682108   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:37.682123   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:37.682134   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:37.682159   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:37.690566   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 16:10:37.699081   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-flag-492000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 16:10:37.700095   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:10:37.700146   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:10:37.700178   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:10:37.700206   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:10:38.088902   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 16:10:38.088916   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 16:10:38.203626   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:10:38.203657   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:10:38.203675   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:10:38.203684   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:10:38.204546   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 16:10:38.204556   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 16:10:39.683446   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 1
	I1204 16:10:39.683465   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:39.683521   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:39.684581   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:39.684659   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:39.684669   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:39.684681   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:39.684693   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:39.684712   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:39.684724   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:39.684732   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:39.684741   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:39.684749   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:39.684762   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:39.684769   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:39.684776   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:39.684786   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:39.684797   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:39.684805   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:39.684811   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:39.684823   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:39.684835   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:39.684853   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:41.686469   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 2
	I1204 16:10:41.686484   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:41.686617   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:41.687758   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:41.687913   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:41.687922   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:41.687928   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:41.687937   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:41.687950   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:41.687955   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:41.687966   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:41.687996   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:41.688009   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:41.688017   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:41.688026   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:41.688033   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:41.688040   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:41.688048   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:41.688058   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:41.688065   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:41.688072   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:41.688080   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:41.688085   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:43.537735   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 16:10:43.537860   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 16:10:43.537871   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 16:10:43.557271   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | 2024/12/04 16:10:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 16:10:43.688184   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 3
	I1204 16:10:43.688233   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:43.688404   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:43.690163   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:43.690391   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:43.690408   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:43.690417   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:43.690433   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:43.690458   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:43.690476   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:43.690486   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:43.690494   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:43.690503   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:43.690514   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:43.690530   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:43.690541   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:43.690552   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:43.690560   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:43.690577   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:43.690601   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:43.690612   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:43.690623   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:43.690631   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:45.691982   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 4
	I1204 16:10:45.691998   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:45.692119   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:45.693082   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:45.693184   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:45.693199   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:45.693208   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:45.693217   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:45.693223   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:45.693230   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:45.693240   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:45.693248   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:45.693257   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:45.693265   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:45.693271   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:45.693279   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:45.693285   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:45.693293   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:45.693299   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:45.693305   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:45.693313   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:45.693319   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:45.693327   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:47.695352   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 5
	I1204 16:10:47.695374   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:47.695431   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:47.696397   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:47.696495   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:47.696507   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:47.696514   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:47.696520   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:47.696528   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:47.696536   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:47.696551   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:47.696572   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:47.696597   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:47.696610   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:47.696618   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:47.696626   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:47.696633   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:47.696642   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:47.696651   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:47.696657   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:47.696665   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:47.696673   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:47.696680   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:49.697520   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 6
	I1204 16:10:49.697535   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:49.697597   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:49.698579   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:49.698712   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:49.698741   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:49.698749   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:49.698755   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:49.698761   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:49.698777   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:49.698790   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:49.698800   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:49.698822   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:49.698831   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:49.698840   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:49.698848   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:49.698855   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:49.698866   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:49.698873   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:49.698880   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:49.698896   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:49.698908   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:49.698918   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:51.699004   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 7
	I1204 16:10:51.699019   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:51.699083   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:51.700078   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:51.700155   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:51.700167   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:51.700177   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:51.700189   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:51.700204   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:51.700219   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:51.700248   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:51.700259   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:51.700275   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:51.700287   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:51.700305   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:51.700317   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:51.700333   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:51.700341   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:51.700347   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:51.700352   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:51.700359   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:51.700367   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:51.700383   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:53.702356   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 8
	I1204 16:10:53.702368   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:53.702408   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:53.703385   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:53.703470   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:53.703479   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:53.703487   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:53.703500   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:53.703508   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:53.703513   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:53.703519   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:53.703528   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:53.703535   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:53.703542   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:53.703565   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:53.703576   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:53.703586   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:53.703592   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:53.703615   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:53.703630   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:53.703638   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:53.703652   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:53.703661   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:55.703753   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 9
	I1204 16:10:55.703768   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:55.703836   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:55.704828   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:55.704925   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:55.704935   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:55.704950   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:55.704957   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:55.704964   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:55.704969   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:55.704976   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:55.704981   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:55.704987   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:55.704995   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:55.705001   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:55.705008   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:55.705015   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:55.705030   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:55.705049   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:55.705063   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:55.705078   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:55.705090   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:55.705100   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:57.705504   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 10
	I1204 16:10:57.705520   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:57.705552   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:57.706591   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:57.706681   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:57.706689   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:57.706700   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:57.706707   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:57.706726   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:57.706740   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:57.706751   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:57.706758   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:57.706765   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:57.706771   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:57.706787   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:57.706796   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:57.706804   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:57.706811   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:57.706822   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:57.706849   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:57.706884   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:57.706895   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:57.706904   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:10:59.707352   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 11
	I1204 16:10:59.707367   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:10:59.707497   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:10:59.708516   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:10:59.708612   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:10:59.708631   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:10:59.708652   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:10:59.708660   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:10:59.708671   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:10:59.708677   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:10:59.708683   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:10:59.708691   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:10:59.708698   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:10:59.708704   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:10:59.708715   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:10:59.708724   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:10:59.708731   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:10:59.708739   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:10:59.708755   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:10:59.708771   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:10:59.708780   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:10:59.708787   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:10:59.708804   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:01.710790   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 12
	I1204 16:11:01.710806   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:01.710885   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:01.711868   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:01.711928   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:01.711942   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:01.711954   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:01.711963   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:01.711969   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:01.711976   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:01.711983   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:01.711988   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:01.712000   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:01.712013   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:01.712023   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:01.712031   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:01.712038   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:01.712045   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:01.712052   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:01.712059   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:01.712076   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:01.712088   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:01.712097   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:03.714200   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 13
	I1204 16:11:03.714214   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:03.714284   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:03.715303   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:03.715453   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:03.715463   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:03.715470   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:03.715482   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:03.715489   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:03.715497   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:03.715514   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:03.715522   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:03.715537   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:03.715564   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:03.715604   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:03.715617   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:03.715631   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:03.715640   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:03.715646   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:03.715666   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:03.715677   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:03.715686   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:03.715695   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:05.716563   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 14
	I1204 16:11:05.716579   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:05.716664   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:05.717863   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:05.717948   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:05.717956   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:05.717965   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:05.717976   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:05.717995   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:05.718008   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:05.718025   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:05.718034   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:05.718046   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:05.718052   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:05.718058   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:05.718064   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:05.718079   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:05.718093   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:05.718101   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:05.718109   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:05.718117   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:05.718124   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:05.718144   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:07.718852   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 15
	I1204 16:11:07.718874   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:07.718923   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:07.719992   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:07.720158   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:07.720168   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:07.720175   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:07.720187   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:07.720194   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:07.720200   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:07.720214   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:07.720222   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:07.720229   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:07.720241   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:07.720248   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:07.720253   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:07.720260   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:07.720265   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:07.720279   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:07.720294   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:07.720302   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:07.720311   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:07.720320   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:09.720592   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 16
	I1204 16:11:09.720606   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:09.720692   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:09.721775   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:09.721944   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:09.721954   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:09.721961   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:09.721968   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:09.721975   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:09.721981   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:09.721997   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:09.722006   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:09.722013   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:09.722021   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:09.722028   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:09.722040   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:09.722047   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:09.722054   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:09.722062   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:09.722071   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:09.722088   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:09.722101   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:09.722118   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:11.722746   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 17
	I1204 16:11:11.722762   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:11.722864   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:11.723963   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:11.724074   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:11.724086   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:11.724095   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:11.724101   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:11.724115   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:11.724126   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:11.724135   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:11.724146   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:11.724152   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:11.724161   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:11.724172   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:11.724180   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:11.724191   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:11.724199   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:11.724206   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:11.724214   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:11.724224   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:11.724232   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:11.724240   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:13.726335   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 18
	I1204 16:11:13.726351   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:13.726393   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:13.727347   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:13.727439   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:13.727449   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:13.727459   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:13.727478   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:13.727489   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:13.727497   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:13.727504   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:13.727513   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:13.727528   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:13.727542   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:13.727551   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:13.727559   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:13.727567   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:13.727575   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:13.727590   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:13.727601   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:13.727608   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:13.727614   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:13.727622   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:15.727763   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 19
	I1204 16:11:15.727779   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:15.727845   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:15.728839   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:15.728952   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:15.728960   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:15.728969   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:15.728974   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:15.728982   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:15.728987   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:15.728997   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:15.729003   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:15.729010   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:15.729021   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:15.729035   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:15.729046   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:15.729053   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:15.729061   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:15.729075   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:15.729098   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:15.729114   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:15.729126   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:15.729136   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:17.730545   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 20
	I1204 16:11:17.730557   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:17.730614   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:17.731637   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:17.731750   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:17.731760   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:17.731766   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:17.731779   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:17.731787   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:17.731794   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:17.731808   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:17.731823   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:17.731835   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:17.731844   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:17.731860   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:17.731873   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:17.731890   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:17.731899   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:17.731906   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:17.731914   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:17.731921   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:17.731929   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:17.731937   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:19.733875   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 21
	I1204 16:11:19.733887   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:19.733907   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:19.734911   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:19.734989   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:19.734997   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:19.735004   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:19.735010   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:19.735015   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:19.735020   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:19.735027   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:19.735044   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:19.735059   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:19.735078   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:19.735091   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:19.735097   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:19.735104   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:19.735111   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:19.735117   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:19.735124   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:19.735134   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:19.735140   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:19.735148   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:21.735781   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 22
	I1204 16:11:21.735797   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:21.735868   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:21.736843   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:21.736939   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:21.736950   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:21.736956   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:21.736962   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:21.736970   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:21.736979   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:21.736986   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:21.736992   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:21.736999   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:21.737005   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:21.737011   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:21.737018   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:21.737033   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:21.737046   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:21.737060   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:21.737072   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:21.737079   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:21.737085   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:21.737099   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:23.739112   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 23
	I1204 16:11:23.739126   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:23.739242   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:23.740325   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:23.740438   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:23.740461   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:23.740495   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:23.740501   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:23.740507   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:23.740513   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:23.740520   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:23.740535   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:23.740553   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:23.740565   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:23.740577   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:23.740586   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:23.740607   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:23.740620   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:23.740631   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:23.740641   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:23.740650   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:23.740658   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:23.740666   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:25.742710   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 24
	I1204 16:11:25.742724   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:25.742772   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:25.744093   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:25.744225   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:25.744237   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:25.744244   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:25.744250   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:25.744277   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:25.744292   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:25.744300   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:25.744308   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:25.744323   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:25.744334   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:25.744347   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:25.744356   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:25.744371   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:25.744384   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:25.744393   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:25.744400   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:25.744407   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:25.744414   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:25.744423   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:27.746437   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 25
	I1204 16:11:27.746450   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:27.746494   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:27.747607   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:27.747706   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:27.747736   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:27.747747   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:27.747756   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:27.747786   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:27.747796   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:27.747806   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:27.747827   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:27.747834   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:27.747842   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:27.747851   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:27.747860   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:27.747868   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:27.747875   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:27.747883   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:27.747891   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:27.747898   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:27.747905   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:27.747917   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:29.749886   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 26
	I1204 16:11:29.749899   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:29.749963   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:29.750990   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:29.751091   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:29.751099   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:29.751119   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:29.751127   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:29.751137   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:29.751145   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:29.751153   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:29.751159   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:29.751172   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:29.751184   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:29.751192   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:29.751200   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:29.751214   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:29.751223   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:29.751230   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:29.751238   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:29.751244   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:29.751250   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:29.751255   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:31.753289   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 27
	I1204 16:11:31.753305   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:31.753366   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:31.754486   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:31.754571   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:31.754583   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:31.754596   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:31.754610   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:31.754618   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:31.754623   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:31.754630   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:31.754636   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:31.754643   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:31.754650   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:31.754657   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:31.754664   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:31.754680   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:31.754691   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:31.754707   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:31.754715   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:31.754722   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:31.754730   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:31.754751   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:33.756439   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 28
	I1204 16:11:33.756884   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:33.757078   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:33.757563   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:33.757693   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:33.757708   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:33.757731   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:33.757753   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:33.757764   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:33.757769   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:33.757831   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:33.757857   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:33.757935   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:33.757949   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:33.758265   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:33.758295   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:33.758303   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:33.758309   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:33.758315   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:33.758320   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:33.758343   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:33.758353   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:33.758362   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:35.758209   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Attempt 29
	I1204 16:11:35.758226   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:11:35.758267   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | hyperkit pid from json: 22556
	I1204 16:11:35.759300   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Searching for aa:05:b9:1b:8c:a2 in /var/db/dhcpd_leases ...
	I1204 16:11:35.759390   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:11:35.759399   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:11:35.759409   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:11:35.759416   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:11:35.759423   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:11:35.759429   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:11:35.759435   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:11:35.759441   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:11:35.759448   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:11:35.759455   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:11:35.759462   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:11:35.759471   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:11:35.759501   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:11:35.759509   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:11:35.759515   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:11:35.759524   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:11:35.759531   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:11:35.759538   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:11:35.759546   22443 main.go:141] libmachine: (force-systemd-flag-492000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:11:37.761617   22443 client.go:171] duration metric: took 1m0.982083144s to LocalClient.Create
	I1204 16:11:39.763600   22443 start.go:128] duration metric: took 1m3.017699485s to createHost
	I1204 16:11:39.763634   22443 start.go:83] releasing machines lock for "force-systemd-flag-492000", held for 1m3.017831983s
	W1204 16:11:39.763724   22443 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-492000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:05:b9:1b:8c:a2
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-492000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:05:b9:1b:8c:a2
	I1204 16:11:39.827000   22443 out.go:201] 
	W1204 16:11:39.847982   22443 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:05:b9:1b:8c:a2
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:05:b9:1b:8c:a2
	W1204 16:11:39.847997   22443 out.go:270] * 
	* 
	W1204 16:11:39.848657   22443 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 16:11:39.910820   22443 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-492000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
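The stderr loop above shows the hyperkit driver polling macOS's DHCP lease database every ~2 seconds for the new VM's MAC address; after roughly a minute it gives up because aa:05:b9:1b:8c:a2 never appears among the 18 stale "minikube" leases. Below is a minimal sketch of such a scan, assuming bootpd's /var/db/dhcpd_leases record format — an illustration only, not the driver's actual code. One detail worth noting from the entries above: the ID fields (e.g. "1,16:14:a9:f:3c:1a" versus HWAddress "16:14:a9:0f:3c:1a") show that bootpd stores octets without leading zeros, so a robust match normalizes both sides.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

// leaseIPForMAC scans a bootpd-style leases file for a record whose
// hw_address matches mac, returning its ip_address.
func leaseIPForMAC(path, mac string) (string, bool) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", false
	}
	ipRe := regexp.MustCompile(`ip_address=(\S+)`)
	hwRe := regexp.MustCompile(`hw_address=1,(\S+)`)
	want := normalizeMAC(mac)
	// Records look like "{\n\tname=minikube\n\tip_address=...\n\t...}",
	// so splitting on "}" yields one record per chunk.
	for _, rec := range strings.Split(string(data), "}") {
		hw := hwRe.FindStringSubmatch(rec)
		ip := ipRe.FindStringSubmatch(rec)
		if hw != nil && ip != nil && normalizeMAC(hw[1]) == want {
			return ip[1], true
		}
	}
	return "", false
}

// normalizeMAC lower-cases a MAC and strips leading zeros from each octet,
// since bootpd writes "0f" as "f" (see the ID fields in the log).
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		trimmed := strings.TrimLeft(p, "0")
		if trimmed == "" {
			trimmed = "0"
		}
		parts[i] = trimmed
	}
	return strings.Join(parts, ":")
}

func main() {
	const target = "aa:05:b9:1b:8c:a2" // MAC the driver was waiting for
	for attempt := 1; attempt <= 30; attempt++ {
		if ip, ok := leaseIPForMAC("/var/db/dhcpd_leases", target); ok {
			fmt.Printf("attempt %d: found IP %s\n", attempt, ip)
			return
		}
		time.Sleep(2 * time.Second) // cadence matches the timestamps above
	}
	fmt.Println("IP address never found in dhcp leases file")
}

In the failed run the guest evidently never requested a lease at all, so no amount of polling would have matched.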
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-492000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-492000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (222.173028ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-492000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-492000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
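The literal "<no value>" placeholders in the suggested commands above are themselves telling: Go's text/template renders a nil field as "<no value>", so the profile name was evidently missing from the data fed to the suggestion template. A hypothetical reproduction follows — the template text and the .Profile field name are assumptions for illustration, not minikube's actual source.

package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical suggestion template; ".Profile" is an assumed field
	// name, not taken from minikube's source.
	t := template.Must(template.New("fix").Parse(
		"minikube delete {{.Profile}}\nminikube start {{.Profile}}\n"))
	// A map lookup that misses yields a nil interface value, which
	// text/template prints as "<no value>" -- matching the output above.
	if err := t.Execute(os.Stdout, map[string]interface{}{}); err != nil {
		panic(err)
	}
}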
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-04 16:11:40.255187 -0800 PST m=+3550.991690414
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-492000 -n force-systemd-flag-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-492000 -n force-systemd-flag-492000: exit status 7 (100.979303ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1204 16:11:40.353729   22579 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1204 16:11:40.353750   22579 status.go:119] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-492000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-492000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-492000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-492000: (5.284839912s)
--- FAIL: TestForceSystemdFlag (252.20s)
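Both force-systemd failures share one root cause: the hyperkit driver cannot learn the new VM's IP address. As the TestForceSystemdEnv log below shows in detail, the driver generates a MAC address (ce:7d:63:7c:36:a8 there) and then repeatedly scans `/var/db/dhcpd_leases` for a matching entry; every attempt finds the same 18 existing leases and never the new one, which eventually surfaces as "IP address is not set" and exit status 80. A self-contained Go sketch of such a lease lookup, assuming the stock macOS bootpd lease format (brace-delimited records with `ip_address=` and `hw_address=` lines) rather than minikube's actual parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans the macOS DHCP lease file for an entry whose
// hardware address matches mac and returns its IP. Hedged sketch:
// the record layout is an assumption, not the driver's own code.
func findIPByMAC(leasePath, mac string) (string, bool, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", false, err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip = "" // start of a new lease record
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// The file form is "hw_address=1,ce:7d:63:7c:36:a8" and,
			// as the ID fields in the log show (e.g. 1,92:d:49:fe:4:ec),
			// octets may drop leading zeros, so compare normalized.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if normalizeMAC(hw) == normalizeMAC(mac) && ip != "" {
				return ip, true, nil
			}
		}
	}
	return "", false, sc.Err()
}

// normalizeMAC zero-pads each octet so "92:d:49:fe:4:ec" and
// "92:0d:49:fe:04:ec" compare equal.
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		if len(p) == 1 {
			parts[i] = "0" + p
		}
	}
	return strings.Join(parts, ":")
}

func main() {
	ip, ok, err := findIPByMAC("/var/db/dhcpd_leases", "ce:7d:63:7c:36:a8")
	if err != nil {
		fmt.Println("read leases:", err)
		return
	}
	if !ok {
		// The state the driver keeps hitting below: 18 leases
		// present, none with the generated MAC, so it retries.
		fmt.Println("MAC not found yet; driver would retry")
		return
	}
	fmt.Println("VM IP:", ip)
}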
TestForceSystemdEnv (232.55s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-608000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E1204 16:05:53.951212   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:06:36.661547   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-608000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m46.896607598s)
-- stdout --
	* [force-systemd-env-608000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-608000" primary control-plane node in "force-systemd-env-608000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-608000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	I1204 16:04:44.206362   22359 out.go:345] Setting OutFile to fd 1 ...
	I1204 16:04:44.206666   22359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 16:04:44.206672   22359 out.go:358] Setting ErrFile to fd 2...
	I1204 16:04:44.206676   22359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 16:04:44.206846   22359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 16:04:44.208457   22359 out.go:352] Setting JSON to false
	I1204 16:04:44.236431   22359 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7454,"bootTime":1733349630,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 16:04:44.236534   22359 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 16:04:44.262107   22359 out.go:177] * [force-systemd-env-608000] minikube v1.34.0 on Darwin 15.0.1
	I1204 16:04:44.307227   22359 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 16:04:44.307329   22359 notify.go:220] Checking for updates...
	I1204 16:04:44.351733   22359 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 16:04:44.372384   22359 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 16:04:44.393002   22359 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 16:04:44.414215   22359 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:04:44.435176   22359 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1204 16:04:44.456485   22359 config.go:182] Loaded profile config "offline-docker-182000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 16:04:44.456584   22359 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 16:04:44.488313   22359 out.go:177] * Using the hyperkit driver based on user configuration
	I1204 16:04:44.530051   22359 start.go:297] selected driver: hyperkit
	I1204 16:04:44.530065   22359 start.go:901] validating driver "hyperkit" against <nil>
	I1204 16:04:44.530075   22359 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 16:04:44.535354   22359 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 16:04:44.535497   22359 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 16:04:44.546270   22359 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 16:04:44.552831   22359 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:04:44.552856   22359 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 16:04:44.552892   22359 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 16:04:44.553137   22359 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 16:04:44.553166   22359 cni.go:84] Creating CNI manager for ""
	I1204 16:04:44.553200   22359 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 16:04:44.553209   22359 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 16:04:44.553276   22359 start.go:340] cluster config:
	{Name:force-systemd-env-608000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 16:04:44.553370   22359 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 16:04:44.595236   22359 out.go:177] * Starting "force-systemd-env-608000" primary control-plane node in "force-systemd-env-608000" cluster
	I1204 16:04:44.616096   22359 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 16:04:44.616125   22359 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1204 16:04:44.616136   22359 cache.go:56] Caching tarball of preloaded images
	I1204 16:04:44.616253   22359 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 16:04:44.616262   22359 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 16:04:44.616338   22359 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/force-systemd-env-608000/config.json ...
	I1204 16:04:44.616356   22359 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/force-systemd-env-608000/config.json: {Name:mkac82c31dab43ebe6d7cfcf081edcc6ac828775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 16:04:44.616718   22359 start.go:360] acquireMachinesLock for force-systemd-env-608000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 16:05:22.042510   22359 start.go:364] duration metric: took 37.424648846s to acquireMachinesLock for "force-systemd-env-608000"
	I1204 16:05:22.042555   22359 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 16:05:22.042625   22359 start.go:125] createHost starting for "" (driver="hyperkit")
	I1204 16:05:22.063878   22359 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 16:05:22.064065   22359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:05:22.064107   22359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:05:22.075340   22359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60562
	I1204 16:05:22.075773   22359 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:05:22.076207   22359 main.go:141] libmachine: Using API Version  1
	I1204 16:05:22.076218   22359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:05:22.076457   22359 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:05:22.076552   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .GetMachineName
	I1204 16:05:22.076650   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .DriverName
	I1204 16:05:22.076746   22359 start.go:159] libmachine.API.Create for "force-systemd-env-608000" (driver="hyperkit")
	I1204 16:05:22.076773   22359 client.go:168] LocalClient.Create starting
	I1204 16:05:22.076808   22359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem
	I1204 16:05:22.076870   22359 main.go:141] libmachine: Decoding PEM data...
	I1204 16:05:22.076886   22359 main.go:141] libmachine: Parsing certificate...
	I1204 16:05:22.076953   22359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem
	I1204 16:05:22.076999   22359 main.go:141] libmachine: Decoding PEM data...
	I1204 16:05:22.077008   22359 main.go:141] libmachine: Parsing certificate...
	I1204 16:05:22.077021   22359 main.go:141] libmachine: Running pre-create checks...
	I1204 16:05:22.077026   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .PreCreateCheck
	I1204 16:05:22.077104   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:22.077322   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .GetConfigRaw
	I1204 16:05:22.148651   22359 main.go:141] libmachine: Creating machine...
	I1204 16:05:22.148660   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .Create
	I1204 16:05:22.148757   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:22.148924   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | I1204 16:05:22.148747   22381 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:05:22.148980   22359 main.go:141] libmachine: (force-systemd-env-608000) Downloading /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-17258/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 16:05:22.359215   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | I1204 16:05:22.359133   22381 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/id_rsa...
	I1204 16:05:22.442437   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | I1204 16:05:22.442365   22381 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/force-systemd-env-608000.rawdisk...
	I1204 16:05:22.442448   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Writing magic tar header
	I1204 16:05:22.442458   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Writing SSH key tar header
	I1204 16:05:22.443048   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | I1204 16:05:22.443006   22381 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000 ...
	I1204 16:05:22.823186   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:22.823204   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/hyperkit.pid
	I1204 16:05:22.823214   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Using UUID 55015a8f-150d-494b-b211-8d889756ad13
	I1204 16:05:22.849241   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Generated MAC ce:7d:63:7c:36:a8
	I1204 16:05:22.849263   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-608000
	I1204 16:05:22.849300   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"55015a8f-150d-494b-b211-8d889756ad13", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:05:22.849344   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"55015a8f-150d-494b-b211-8d889756ad13", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:05:22.849387   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "55015a8f-150d-494b-b211-8d889756ad13", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/force-systemd-env-608000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-608000"}
	I1204 16:05:22.849437   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 55015a8f-150d-494b-b211-8d889756ad13 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/force-systemd-env-608000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-608000"
	I1204 16:05:22.849454   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 16:05:22.852621   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 DEBUG: hyperkit: Pid is 22382
	I1204 16:05:22.853142   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 0
	I1204 16:05:22.853158   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:22.853229   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:22.854406   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:22.854569   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:22.854589   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:22.854610   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:22.854647   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:22.854663   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:22.854689   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:22.854731   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:22.854747   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:22.854767   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:22.854785   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:22.854804   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:22.854823   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:22.854835   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:22.854865   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:22.854876   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:22.854887   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:22.854898   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:22.854909   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:22.854918   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:22.863300   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 16:05:22.872424   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 16:05:22.873188   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:05:22.873215   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:05:22.873223   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:05:22.873229   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:05:23.258512   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 16:05:23.258529   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 16:05:23.373229   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:05:23.373257   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:05:23.373267   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:05:23.373274   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:05:23.374119   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 16:05:23.374129   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 16:05:24.856835   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 1
	I1204 16:05:24.856851   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:24.856943   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:24.857992   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:24.858078   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:24.858102   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:24.858117   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:24.858128   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:24.858136   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:24.858144   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:24.858149   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:24.858156   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:24.858168   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:24.858174   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:24.858180   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:24.858186   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:24.858192   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:24.858198   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:24.858204   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:24.858211   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:24.858228   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:24.858239   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:24.858260   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:26.858822   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 2
	I1204 16:05:26.858842   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:26.858881   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:26.859865   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:26.859963   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:26.859971   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:26.859979   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:26.859985   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:26.859992   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:26.860001   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:26.860017   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:26.860027   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:26.860036   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:26.860052   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:26.860060   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:26.860067   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:26.860073   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:26.860086   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:26.860092   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:26.860099   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:26.860108   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:26.860124   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:26.860136   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:28.735516   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 16:05:28.735616   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 16:05:28.735625   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 16:05:28.755164   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:05:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 16:05:28.862329   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 3
	I1204 16:05:28.862355   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:28.862633   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:28.864546   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:28.864722   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:28.864736   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:28.864745   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:28.864753   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:28.864761   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:28.864783   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:28.864793   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:28.864813   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:28.864854   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:28.864871   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:28.864893   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:28.864904   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:28.864914   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:28.864924   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:28.864934   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:28.864942   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:28.864951   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:28.864961   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:28.864994   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:30.866394   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 4
	I1204 16:05:30.866443   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:30.866495   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:30.867505   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:30.867592   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:30.867600   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:30.867610   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:30.867615   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:30.867622   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:30.867633   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:30.867655   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:30.867668   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:30.867678   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:30.867687   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:30.867708   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:30.867716   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:30.867723   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:30.867730   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:30.867737   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:30.867742   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:30.867749   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:30.867757   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:30.867765   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:32.868734   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 5
	I1204 16:05:32.868749   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:32.868817   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:32.869804   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:32.869888   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:32.869898   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:32.869914   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:32.869921   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:32.869929   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:32.869937   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:32.869945   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:32.869955   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:32.869961   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:32.869974   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:32.869983   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:32.869989   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:32.869995   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:32.870008   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:32.870021   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:32.870035   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:32.870048   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:32.870058   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:32.870066   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:34.870664   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 6
	I1204 16:05:34.870679   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:34.870775   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:34.871750   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:34.871843   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:34.871853   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:34.871860   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:34.871865   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:34.871873   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:34.871879   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:34.871886   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:34.871893   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:34.871900   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:34.871906   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:34.871917   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:34.871925   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:34.871931   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:34.871937   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:34.871943   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:34.871950   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:34.871971   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:34.871982   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:34.871993   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
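	// Illustrative sketch (not the minikube driver's actual code): each "dhcp entry"
	// line above is Go's %+v rendering of a record parsed out of /var/db/dhcpd_leases.
	// Assuming the macOS bootpd lease format (brace-delimited blocks of key=value
	// lines such as name=, ip_address=, hw_address=1,<mac>, lease=0x<hex>), a minimal
	// parser could look like this; DHCPEntry and parseLeases are hypothetical names.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// DHCPEntry mirrors the fields printed in the log lines above.
	type DHCPEntry struct {
		Name      string
		IPAddress string
		HWAddress string
		ID        string
		Lease     string
	}
	
	// parseLeases splits the file into brace-delimited records and collects the
	// key=value pairs of each one.
	func parseLeases(raw string) []DHCPEntry {
		var entries []DHCPEntry
		for _, block := range strings.Split(raw, "{") {
			var e DHCPEntry
			for _, line := range strings.Split(block, "\n") {
				line = strings.TrimSuffix(strings.TrimSpace(line), "}")
				k, v, ok := strings.Cut(line, "=")
				if !ok {
					continue
				}
				switch k {
				case "name":
					e.Name = v
				case "ip_address":
					e.IPAddress = v
				case "hw_address":
					e.HWAddress = strings.TrimPrefix(v, "1,") // "1," is the hardware-type prefix
					e.ID = v
				case "lease":
					e.Lease = v
				}
			}
			if e.IPAddress != "" {
				entries = append(entries, e)
			}
		}
		return entries
	}
	
	func main() {
		sample := "{\n\tname=minikube\n\tip_address=192.169.0.19\n\thw_address=1,1e:8d:c2:3c:32:e4\n\tlease=0x6750fbac\n}\n"
		for _, e := range parseLeases(sample) {
			fmt.Printf("dhcp entry: %+v\n", e) // prints the same shape as the log lines above
		}
	}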
	I1204 16:05:36.874053   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 7
	I1204 16:05:36.874069   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:36.874141   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:36.875186   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:36.875276   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:36.875284   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:36.875292   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:36.875304   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:36.875316   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:36.875329   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:36.875336   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:36.875342   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:36.875356   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:36.875365   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:36.875372   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:36.875379   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:36.875386   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:36.875394   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:36.875407   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:36.875414   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:36.875421   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:36.875428   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:36.875440   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:38.875582   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 8
	I1204 16:05:38.875595   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:38.875667   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:38.876679   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:38.876768   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:38.876775   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:38.876792   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:38.876798   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:38.876805   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:38.876811   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:38.876826   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:38.876839   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:38.876862   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:38.876874   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:38.876882   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:38.876889   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:38.876898   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:38.876905   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:38.876913   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:38.876920   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:38.876927   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:38.876933   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:38.876941   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
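	// Illustrative sketch of the retry pattern visible above (not the driver's
	// actual code): the "Attempt N" stanzas arrive roughly every two seconds, each
	// one re-reading the lease file and scanning for the VM's MAC until it appears
	// or a retry budget runs out. findIPForMAC and maxAttempts are hypothetical
	// names; the scan assumes ip_address precedes hw_address within each record,
	// as bootpd writes them.
	package main
	
	import (
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// findIPForMAC scans the lease file for a hw_address matching mac and returns
	// the ip_address of that record, or "" if no entry matches yet.
	func findIPForMAC(path, mac string) (string, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return "", err
		}
		var ip string
		for _, line := range strings.Split(string(raw), "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v // remember the IP of the record we are inside
			}
			if v, ok := strings.CutPrefix(line, "hw_address="); ok {
				if strings.TrimPrefix(v, "1,") == mac {
					return ip, nil
				}
			}
		}
		return "", nil
	}
	
	func main() {
		const maxAttempts = 60 // give up eventually; the log shows at least 20 tries
		mac := "ce:7d:63:7c:36:a8"
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			fmt.Printf("Attempt %d: searching for %s in /var/db/dhcpd_leases\n", attempt, mac)
			ip, err := findIPForMAC("/var/db/dhcpd_leases", mac)
			if err == nil && ip != "" {
				fmt.Println("found IP:", ip)
				return
			}
			time.Sleep(2 * time.Second) // matches the ~2 s spacing of the attempts above
		}
		fmt.Println("timed out waiting for a DHCP lease")
	}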
	I1204 16:05:40.879059   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 9
	I1204 16:05:40.879073   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:40.879125   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:40.880206   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:40.880257   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:40.880270   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:40.880279   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:40.880285   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:40.880315   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:40.880346   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:40.880379   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:40.880397   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:40.880405   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:40.880410   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:40.880417   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:40.880422   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:40.880438   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:40.880451   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:40.880458   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:40.880465   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:40.880471   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:40.880477   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:40.880485   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:42.882561   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 10
	I1204 16:05:42.882574   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:42.882643   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:42.883674   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:42.883747   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:42.883758   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:42.883766   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:42.883772   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:42.883779   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:42.883785   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:42.883799   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:42.883806   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:42.883812   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:42.883833   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:42.883839   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:42.883846   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:42.883857   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:42.883865   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:42.883873   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:42.883879   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:42.883887   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:42.883893   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:42.883899   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:44.884000   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 11
	I1204 16:05:44.884015   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:44.884135   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:44.885122   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:44.885200   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:44.885210   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:44.885228   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:44.885235   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:44.885268   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:44.885278   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:44.885285   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:44.885290   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:44.885299   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:44.885306   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:44.885323   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:44.885338   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:44.885346   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:44.885355   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:44.885362   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:44.885372   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:44.885379   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:44.885387   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:44.885404   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
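	// Illustrative decode of the Lease field (assumption: bootpd stores lease
	// expiry as hex seconds since the Unix epoch, which the values above are
	// consistent with). Decoding 0x6750fbac gives a time about an hour after the
	// 16:05 PST log lines, i.e. these 18 leases are still live while the search
	// keeps missing.
	package main
	
	import (
		"fmt"
		"strconv"
		"time"
	)
	
	func main() {
		secs, err := strconv.ParseInt("6750fbac", 16, 64) // Lease value from the entries above
		if err != nil {
			panic(err)
		}
		// 0x6750fbac == 1733360556
		fmt.Println(time.Unix(secs, 0).UTC()) // 2024-12-05 01:02:36 +0000 UTC
	}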
	I1204 16:05:46.886055   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 12
	I1204 16:05:46.886067   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:46.886132   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:46.887287   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:46.887381   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:46.887394   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:46.887411   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:46.887418   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:46.887432   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:46.887439   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:46.887445   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:46.887453   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:46.887459   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:46.887465   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:46.887473   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:46.887481   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:46.887496   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:46.887510   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:46.887518   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:46.887531   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:46.887553   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:46.887565   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:46.887580   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:48.889654   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 13
	I1204 16:05:48.889670   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:48.889716   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:48.890769   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:48.890858   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:48.890870   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:48.890878   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:48.890889   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:48.890906   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:48.890919   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:48.890927   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:48.890933   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:48.890948   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:48.890960   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:48.890969   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:48.890975   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:48.890992   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:48.891004   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:48.891012   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:48.891017   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:48.891024   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:48.891032   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:48.891041   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:50.892179   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 14
	I1204 16:05:50.892195   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:50.892241   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:50.893510   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:50.893613   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:50.893620   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:50.893627   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:50.893634   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:50.893643   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:50.893651   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:50.893658   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:50.893666   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:50.893674   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:50.893682   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:50.893688   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:50.893696   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:50.893702   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:50.893707   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:50.893715   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:50.893722   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:50.893729   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:50.893774   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:50.893789   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:52.895571   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 15
	I1204 16:05:52.895584   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:52.895640   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:52.896638   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:52.896748   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:52.896758   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:52.896765   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:52.896774   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:52.896783   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:52.896791   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:52.896801   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:52.896810   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:52.896816   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:52.896823   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:52.896831   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:52.896849   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:52.896861   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:52.896869   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:52.896877   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:52.896884   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:52.896904   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:52.896910   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:52.896919   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
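	// Why the loop never terminates here: every attempt finds the same 18 leases
	// and none of them carries the target MAC, so the VM under test has not yet
	// completed DHCP. A quick membership check over the MACs copied verbatim from
	// the listing above (illustrative only):
	package main
	
	import "fmt"
	
	func main() {
		leased := []string{
			"1e:8d:c2:3c:32:e4", "fa:db:9f:5a:55:77", "82:49:3b:6f:a8:75",
			"ea:6c:f6:24:a7:1c", "92:0d:49:fe:04:ec", "2e:c1:72:c2:8c:52",
			"be:a4:4e:cc:46:eb", "52:62:44:b2:45:9a", "9e:45:96:2d:a8:93",
			"16:14:a9:0f:3c:1a", "f6:f1:1e:09:c4:d0", "7a:59:49:d0:f8:66",
			"56:f8:e7:bc:e7:07", "b2:39:f5:23:0b:32", "46:3b:47:9c:31:41",
			"7e:88:6b:de:a2:10", "56:ea:fb:8f:4f:d5", "b6:eb:fa:b5:f1:f1",
		}
		target := "ce:7d:63:7c:36:a8"
		found := false
		for _, mac := range leased {
			if mac == target {
				found = true
				break
			}
		}
		fmt.Printf("%s leased: %v (entries: %d)\n", target, found, len(leased)) // false (18)
	}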
	I1204 16:05:54.897405   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 16
	I1204 16:05:54.897417   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:54.897544   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:54.898532   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:54.898655   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:54.898664   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:54.898673   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:54.898688   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:54.898697   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:54.898703   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:54.898711   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:54.898726   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:54.898739   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:54.898753   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:54.898766   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:54.898778   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:54.898786   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:54.898794   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:54.898801   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:54.898807   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:54.898814   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:54.898827   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:54.898836   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:56.899127   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 17
	I1204 16:05:56.899142   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:56.899262   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:56.900253   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:56.900387   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:56.900397   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:56.900404   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:56.900409   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:56.900426   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:56.900435   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:56.900458   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:56.900470   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:56.900490   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:56.900502   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:56.900511   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:56.900520   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:56.900544   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:56.900551   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:56.900561   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:56.900567   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:56.900573   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:56.900582   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:56.900590   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:05:58.900705   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 18
	I1204 16:05:58.900717   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:05:58.900791   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:05:58.901839   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:05:58.901905   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:05:58.901913   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:05:58.901935   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:05:58.901941   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:05:58.901947   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:05:58.901953   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:05:58.901960   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:05:58.901965   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:05:58.901979   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:05:58.901988   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:05:58.901999   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:05:58.902007   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:05:58.902014   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:05:58.902022   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:05:58.902028   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:05:58.902038   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:05:58.902045   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:05:58.902053   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:05:58.902060   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:06:00.904133   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 19
	I1204 16:06:00.904146   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:00.904175   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:06:00.905162   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for ce:7d:63:7c:36:a8 in /var/db/dhcpd_leases ...
	I1204 16:06:00.905228   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:06:00.905238   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:06:00.905247   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:06:00.905253   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:06:00.905269   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:06:00.905279   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:06:00.905285   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:06:00.905291   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:06:00.905299   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:06:00.905306   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:06:00.905323   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:06:00.905337   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:06:00.905344   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:06:00.905351   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:06:00.905358   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:06:00.905365   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:06:00.905372   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:06:00.905386   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:06:00.905395   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	[... attempts 20 through 29 (16:06:02-16:06:20) repeat the identical scan every two seconds: 18 entries found in /var/db/dhcpd_leases, none matching ce:7d:63:7c:36:a8 ...]
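
The loop above is the heart of this failure: roughly every two seconds the hyperkit driver re-reads /var/db/dhcpd_leases looking for the MAC assigned to the new VM, and after ~30 attempts it gives up with "IP address never found in dhcp leases file". Below is a minimal sketch of that kind of scan, assuming the macOS lease-file layout of "{ name=... ip_address=... hw_address=1,... }" blocks; lookupLeaseIP and its field handling are illustrative, not minikube's actual code.

    // lookupLeaseIP is a hypothetical helper illustrating the scan logged
    // above: read the lease file and return the ip_address of the block
    // whose hw_address matches the VM's MAC. It assumes ip_address appears
    // before hw_address within each block, as it does on macOS. A robust
    // version would also normalize octets, since the file may drop leading
    // zeros (compare the HWAddress and ID fields in the entries above).
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func lookupLeaseIP(leaseFile, mac string) (string, error) {
        f, err := os.Open(leaseFile)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := strings.TrimSpace(scanner.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                hw := strings.TrimPrefix(line, "hw_address=")
                if i := strings.IndexByte(hw, ','); i >= 0 {
                    hw = hw[i+1:] // strip the "1," hardware-type prefix
                }
                if strings.EqualFold(hw, mac) {
                    return ip, nil
                }
            }
        }
        if err := scanner.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("could not find an IP address for %s", mac)
    }

    func main() {
        fmt.Println(lookupLeaseIP("/var/db/dhcpd_leases", "ce:7d:63:7c:36:a8"))
    }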
	I1204 16:06:22.930889   22359 client.go:171] duration metric: took 1m0.852279585s to LocalClient.Create
	I1204 16:06:24.933045   22359 start.go:128] duration metric: took 1m2.888510355s to createHost
	I1204 16:06:24.933058   22359 start.go:83] releasing machines lock for "force-systemd-env-608000", held for 1m2.888644485s
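
The "duration metric: took ..." lines are the standard Go timing idiom: capture time.Now() at the start of the operation and log time.Since() at the end. A tiny sketch:

    package main

    import (
        "log"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(10 * time.Millisecond) // stand-in for createHost's real work
        log.Printf("duration metric: took %s to createHost", time.Since(start))
    }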
	W1204 16:06:24.933075   22359 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:7d:63:7c:36:a8
	I1204 16:06:24.933412   22359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:06:24.933440   22359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:06:24.944563   22359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60564
	I1204 16:06:24.944905   22359 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:06:24.945299   22359 main.go:141] libmachine: Using API Version  1
	I1204 16:06:24.945315   22359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:06:24.945548   22359 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:06:24.945909   22359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:06:24.945939   22359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:06:24.956827   22359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60566
	I1204 16:06:24.957176   22359 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:06:24.957534   22359 main.go:141] libmachine: Using API Version  1
	I1204 16:06:24.957548   22359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:06:24.957765   22359 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:06:24.957882   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .GetState
	I1204 16:06:24.957982   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:24.958045   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:06:24.959268   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .DriverName
	I1204 16:06:24.996399   22359 out.go:177] * Deleting "force-systemd-env-608000" in hyperkit ...
	I1204 16:06:25.038500   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .Remove
	I1204 16:06:25.038628   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:25.038637   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:25.038701   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:06:25.039876   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:25.039949   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | waiting for graceful shutdown
	I1204 16:06:26.040212   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:26.040300   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:06:26.041600   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | waiting for graceful shutdown
	I1204 16:06:27.042388   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:27.042455   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:06:27.043941   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | waiting for graceful shutdown
	I1204 16:06:28.045869   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:28.045921   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:06:28.046646   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | waiting for graceful shutdown
	I1204 16:06:29.046824   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:29.046899   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:06:29.048089   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | waiting for graceful shutdown
	I1204 16:06:30.048293   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:30.048383   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22382
	I1204 16:06:30.049062   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | sending sigkill
	I1204 16:06:30.049070   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:06:30.061797   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:06:30 WARN : hyperkit: failed to read stdout: EOF
	I1204 16:06:30.061822   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:06:30 WARN : hyperkit: failed to read stderr: EOF
	W1204 16:06:30.082617   22359 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:7d:63:7c:36:a8
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:7d:63:7c:36:a8
	I1204 16:06:30.082634   22359 start.go:729] Will try again in 5 seconds ...
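
After the first createHost times out, minikube tears the VM down and, as logged above, retries the whole StartHost once more after a 5-second pause. A generic sketch of that retry shape follows; the retry helper and its signature are illustrative, not minikube's API.

    package main

    import (
        "fmt"
        "time"
    )

    // retry runs fn up to attempts times, sleeping delay between failures.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            if i < attempts-1 {
                fmt.Printf("! StartHost failed, but will try again: %v\n", err)
                time.Sleep(delay)
            }
        }
        return err
    }

    func main() {
        err := retry(2, 5*time.Second, func() error {
            return fmt.Errorf("could not find an IP address for ce:7d:63:7c:36:a8")
        })
        fmt.Println("final:", err)
    }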
	I1204 16:06:35.084851   22359 start.go:360] acquireMachinesLock for force-systemd-env-608000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 16:07:27.831052   22359 start.go:364] duration metric: took 52.744583085s to acquireMachinesLock for "force-systemd-env-608000"
	I1204 16:07:27.831089   22359 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
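
The single-line config dump above is simply Go's %+v formatting of a pointer to the cluster config struct, which prints &{Field:value ...}. For example, with an illustrative subset of the fields:

    package main

    import "fmt"

    // ClusterConfig is an illustrative subset of the config struct above.
    type ClusterConfig struct {
        Name   string
        Memory int
        CPUs   int
        Driver string
    }

    func main() {
        cfg := &ClusterConfig{Name: "force-systemd-env-608000", Memory: 2048, CPUs: 2, Driver: "hyperkit"}
        // %+v on a struct pointer prints &{Name:... Memory:... ...},
        // the exact shape of the "Provisioning new machine" log line.
        fmt.Printf("Provisioning new machine with config: %+v\n", cfg)
    }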
	I1204 16:07:27.831142   22359 start.go:125] createHost starting for "" (driver="hyperkit")
	I1204 16:07:27.852954   22359 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 16:07:27.853055   22359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 16:07:27.853079   22359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 16:07:27.864254   22359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60570
	I1204 16:07:27.864586   22359 main.go:141] libmachine: () Calling .GetVersion
	I1204 16:07:27.865018   22359 main.go:141] libmachine: Using API Version  1
	I1204 16:07:27.865043   22359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 16:07:27.865300   22359 main.go:141] libmachine: () Calling .GetMachineName
	I1204 16:07:27.865427   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .GetMachineName
	I1204 16:07:27.865533   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .DriverName
	I1204 16:07:27.865643   22359 start.go:159] libmachine.API.Create for "force-systemd-env-608000" (driver="hyperkit")
	I1204 16:07:27.865660   22359 client.go:168] LocalClient.Create starting
	I1204 16:07:27.865691   22359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem
	I1204 16:07:27.865754   22359 main.go:141] libmachine: Decoding PEM data...
	I1204 16:07:27.865769   22359 main.go:141] libmachine: Parsing certificate...
	I1204 16:07:27.865813   22359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem
	I1204 16:07:27.865861   22359 main.go:141] libmachine: Decoding PEM data...
	I1204 16:07:27.865871   22359 main.go:141] libmachine: Parsing certificate...
	I1204 16:07:27.865886   22359 main.go:141] libmachine: Running pre-create checks...
	I1204 16:07:27.865892   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .PreCreateCheck
	I1204 16:07:27.865967   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:27.865993   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .GetConfigRaw
	I1204 16:07:27.894414   22359 main.go:141] libmachine: Creating machine...
	I1204 16:07:27.894423   22359 main.go:141] libmachine: (force-systemd-env-608000) Calling .Create
	I1204 16:07:27.894517   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:27.894714   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | I1204 16:07:27.894511   22431 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 16:07:27.894775   22359 main.go:141] libmachine: (force-systemd-env-608000) Downloading /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-17258/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1204 16:07:28.244433   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | I1204 16:07:28.244356   22431 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/id_rsa...
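
The "Creating ssh key" step generates the id_rsa the new machine will be provisioned with. A stdlib-only sketch of generating and writing such a key is below; writePrivateKey is illustrative, and the real flow also derives and stores the public half.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"
    )

    // writePrivateKey generates an RSA key and writes it PEM-encoded
    // with 0600 permissions, as ssh expects for an identity file.
    func writePrivateKey(path string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        block := &pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        }
        return os.WriteFile(path, pem.EncodeToMemory(block), 0o600)
    }

    func main() {
        if err := writePrivateKey("id_rsa"); err != nil {
            panic(err)
        }
    }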
	I1204 16:07:28.364739   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | I1204 16:07:28.364685   22431 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/force-systemd-env-608000.rawdisk...
	I1204 16:07:28.364749   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Writing magic tar header
	I1204 16:07:28.364761   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Writing SSH key tar header
	I1204 16:07:28.365066   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | I1204 16:07:28.365042   22431 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000 ...
	I1204 16:07:28.745639   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:28.745669   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/hyperkit.pid
	I1204 16:07:28.745694   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Using UUID 9097f102-8203-447d-81d1-7924d2b604df
	I1204 16:07:28.768732   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Generated MAC 76:f1:26:5c:66:7c
	I1204 16:07:28.768756   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-608000
	I1204 16:07:28.768798   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9097f102-8203-447d-81d1-7924d2b604df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:07:28.768833   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9097f102-8203-447d-81d1-7924d2b604df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 16:07:28.768902   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9097f102-8203-447d-81d1-7924d2b604df", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/force-systemd-env-608000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-608000"}
	I1204 16:07:28.768953   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9097f102-8203-447d-81d1-7924d2b604df -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/force-systemd-env-608000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-608000"
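
For readers tracing the driver's behavior, the Arguments slice logged above maps one-to-one onto the hyperkit command line that follows it. A minimal Go sketch of how such an argv could be assembled is shown below; vmConfig and buildHyperkitArgs are illustrative stand-ins, not the actual docker-machine-driver-hyperkit types.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // vmConfig holds only the fields visible in the log above; it is an
    // illustrative stand-in, not the real driver struct.
    type vmConfig struct {
    	PidFile  string
    	CPUs     int
    	MemoryMB int
    	UUID     string
    	RawDisk  string
    	ISO      string
    	TTY      string
    	Ring     string
    	Kernel   string
    	Initrd   string
    	CmdLine  string // kernel command line, e.g. "earlyprintk=serial loglevel=3 ..."
    }

    // buildHyperkitArgs mirrors the "-A -u -F ... -c ... -m ..." argument
    // list that the driver logs before starting the VM.
    func buildHyperkitArgs(c vmConfig) []string {
    	return []string{
    		"-A", "-u",
    		"-F", c.PidFile,
    		"-c", fmt.Sprintf("%d", c.CPUs),
    		"-m", fmt.Sprintf("%dM", c.MemoryMB),
    		"-s", "0:0,hostbridge",
    		"-s", "31,lpc",
    		"-s", "1:0,virtio-net",
    		"-U", c.UUID,
    		"-s", "2:0,virtio-blk," + c.RawDisk,
    		"-s", "3,ahci-cd," + c.ISO,
    		"-s", "4,virtio-rnd",
    		"-l", "com1,autopty=" + c.TTY + ",log=" + c.Ring,
    		"-f", "kexec," + c.Kernel + "," + c.Initrd + "," + c.CmdLine,
    	}
    }

    func main() {
    	args := buildHyperkitArgs(vmConfig{PidFile: "hyperkit.pid", CPUs: 2, MemoryMB: 2048})
    	fmt.Println("/usr/local/bin/hyperkit " + strings.Join(args, " "))
    }
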
	I1204 16:07:28.768971   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 16:07:28.771962   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 DEBUG: hyperkit: Pid is 22441
	I1204 16:07:28.772406   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 0
	I1204 16:07:28.772420   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:28.772497   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:28.773633   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:28.773773   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:28.773796   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:28.773820   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:28.773833   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:28.773840   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:28.773851   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:28.773858   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:28.773863   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:28.773869   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:28.773881   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:28.773891   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:28.773926   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:28.773939   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:28.773947   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:28.773956   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:28.773968   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:28.773999   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:28.774010   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:28.774018   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
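
Each "Attempt N" block above is the driver rescanning /var/db/dhcpd_leases for the MAC it generated (76:f1:26:5c:66:7c), which never appears among the 18 existing entries. Note that the lease file records hardware addresses with leading zeros stripped per octet (visible in the ID fields, e.g. HWAddress 92:0d:49:fe:04:ec appears as 92:d:49:fe:4:ec), so any matcher has to normalize both sides before comparing. The sketch below is illustrative only; normalizeMAC and findLease are assumptions, not the driver's code.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // normalizeMAC lowercases a MAC and strips leading zeros from each
    // octet, matching the zero-stripped form seen in the lease entries
    // above (e.g. "92:0d:49:fe:04:ec" -> "92:d:49:fe:4:ec").
    func normalizeMAC(mac string) string {
    	parts := strings.Split(strings.ToLower(mac), ":")
    	for i, p := range parts {
    		parts[i] = strings.TrimLeft(p, "0")
    		if parts[i] == "" {
    			parts[i] = "0" // an all-zero octet still needs one digit
    		}
    	}
    	return strings.Join(parts, ":")
    }

    // findLease scans already-parsed hw_address -> ip_address pairs for a
    // target MAC. The map stands in for a real dhcpd_leases parser.
    func findLease(leases map[string]string, target string) (string, bool) {
    	want := normalizeMAC(target)
    	for hw, ip := range leases {
    		if normalizeMAC(hw) == want {
    			return ip, true
    		}
    	}
    	return "", false
    }

    func main() {
    	leases := map[string]string{
    		"92:d:49:fe:4:ec": "192.169.0.15",
    	}
    	ip, ok := findLease(leases, "92:0d:49:fe:04:ec")
    	fmt.Println(ip, ok) // 192.169.0.15 true
    }
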
	I1204 16:07:28.782289   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 16:07:28.790827   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/force-systemd-env-608000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 16:07:28.791782   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:07:28.791809   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:07:28.791820   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:07:28.791833   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:07:29.180368   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 16:07:29.180383   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 16:07:29.295015   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 16:07:29.295035   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 16:07:29.295077   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 16:07:29.295094   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 16:07:29.295893   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 16:07:29.295903   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 16:07:30.775931   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 1
	I1204 16:07:30.775945   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:30.775998   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:30.777006   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:30.777094   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:30.777109   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:30.777118   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:30.777127   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:30.777159   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:30.777170   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:30.777182   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:30.777190   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:30.777202   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:30.777210   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:30.777216   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:30.777222   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:30.777229   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:30.777236   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:30.777252   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:30.777264   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:30.777273   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:30.777281   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:30.777292   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
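
The timestamps show the driver re-reading the lease table roughly every two seconds (16:07:28, :30, :32, ...). A generic polling loop of that shape might look like the sketch below; waitForIP, the lookupIP callback, and the attempt cap are placeholders chosen for illustration, not the driver's actual constants.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForIP polls lookupIP every interval until it returns an address
    // or maxAttempts is exhausted, mirroring the "Attempt N" lines above.
    func waitForIP(lookupIP func() (string, bool), interval time.Duration, maxAttempts int) (string, error) {
    	for attempt := 0; attempt < maxAttempts; attempt++ {
    		if ip, ok := lookupIP(); ok {
    			return ip, nil
    		}
    		time.Sleep(interval)
    	}
    	return "", errors.New("IP address never found in dhcp leases file")
    }

    func main() {
    	n := 0
    	ip, err := waitForIP(func() (string, bool) {
    		fmt.Printf("Attempt %d\n", n)
    		n++
    		return "192.169.0.20", n > 3 // pretend the lease shows up on the 4th scan
    	}, 2*time.Second, 30)
    	fmt.Println(ip, err)
    }
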
	I1204 16:07:32.778833   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 2
	I1204 16:07:32.778849   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:32.778922   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:32.779906   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:32.780001   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:32.780010   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:32.780018   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:32.780025   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:32.780041   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:32.780054   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:32.780068   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:32.780074   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:32.780081   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:32.780087   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:32.780095   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:32.780124   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:32.780138   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:32.780148   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:32.780155   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:32.780167   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:32.780179   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:32.780188   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:32.780195   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:34.659569   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 16:07:34.659628   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 16:07:34.659637   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 16:07:34.679547   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | 2024/12/04 16:07:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 16:07:34.780272   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 3
	I1204 16:07:34.780287   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:34.780423   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:34.781385   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:34.781498   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:34.781508   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:34.781524   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:34.781531   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:34.781538   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:34.781546   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:34.781552   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:34.781559   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:34.781569   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:34.781576   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:34.781592   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:34.781605   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:34.781626   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:34.781636   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:34.781645   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:34.781652   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:34.781666   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:34.781678   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:34.781686   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:36.782587   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 4
	I1204 16:07:36.782605   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:36.782717   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:36.783703   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:36.783789   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:36.783799   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:36.783807   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:36.783814   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:36.783820   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:36.783831   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:36.783843   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:36.783850   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:36.783877   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:36.783893   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:36.783910   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:36.783921   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:36.783929   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:36.783937   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:36.783944   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:36.783952   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:36.783959   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:36.783964   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:36.783972   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:38.786087   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 5
	I1204 16:07:38.786100   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:38.786160   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:38.787177   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:38.787290   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:38.787299   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:38.787306   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:38.787312   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:38.787318   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:38.787325   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:38.787336   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:38.787346   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:38.787352   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:38.787367   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:38.787380   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:38.787389   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:38.787396   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:38.787435   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:38.787464   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:38.787472   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:38.787481   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:38.787499   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:38.787511   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:40.787990   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 6
	I1204 16:07:40.788003   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:40.788042   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:40.789012   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:40.789152   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:40.789163   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:40.789169   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:40.789178   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:40.789199   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:40.789211   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:40.789233   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:40.789246   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:40.789254   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:40.789262   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:40.789269   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:40.789276   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:40.789289   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:40.789298   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:40.789306   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:40.789325   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:40.789336   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:40.789345   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:40.789354   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:42.790783   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 7
	I1204 16:07:42.790798   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:42.790856   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:42.791851   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:42.791940   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:42.791948   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:42.791955   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:42.791961   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:42.791973   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:42.791981   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:42.792008   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:42.792016   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:42.792024   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:42.792032   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:42.792047   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:42.792058   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:42.792072   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:42.792080   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:42.792090   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:42.792098   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:42.792104   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:42.792112   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:42.792121   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:44.793026   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 8
	I1204 16:07:44.793041   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:44.793108   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:44.794119   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:44.794219   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:44.794229   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:44.794236   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:44.794254   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:44.794262   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:44.794268   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:44.794283   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:44.794292   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:44.794300   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:44.794308   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:44.794315   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:44.794321   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:44.794339   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:44.794353   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:44.794364   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:44.794372   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:44.794380   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:44.794394   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:44.794403   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:46.795880   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 9
	I1204 16:07:46.795894   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:46.795970   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:46.796952   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:46.797055   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:46.797082   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:46.797090   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:46.797095   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:46.797101   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:46.797106   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:46.797122   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:46.797131   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:46.797142   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:46.797155   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:46.797167   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:46.797182   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:46.797205   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:46.797217   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:46.797226   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:46.797233   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:46.797240   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:46.797248   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:46.797263   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:07:48.798497   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 10
	I1204 16:07:48.798509   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:07:48.798569   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:07:48.799542   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:07:48.799627   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:07:48.799636   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:07:48.799644   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:07:48.799653   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:07:48.799659   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:07:48.799670   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:07:48.799680   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:07:48.799688   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:07:48.799704   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:07:48.799718   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:07:48.799733   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:07:48.799746   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:07:48.799767   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:07:48.799785   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:07:48.799794   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:07:48.799801   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:07:48.799808   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:07:48.799816   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:07:48.799824   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
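	Each "dhcp entry" line above is one record from /var/db/dhcpd_leases, the lease file written by the macOS bootpd DHCP server. From memory of that file's layout (field names and order may differ slightly; treat this as an illustrative sample, not a capture from this run), a single record looks roughly like:
	
		{
			name=minikube
			ip_address=192.169.0.4
			hw_address=1,7e:88:6b:de:a2:10
			identifier=1,7e:88:6b:de:a2:10
			lease=0x6750f266
		}
	
	The leading "1," is the hardware type (1 = Ethernet), which is why the logged ID fields carry that prefix. Note also that bootpd drops leading zeros within each octet: the entry logged as HWAddress 92:0d:49:fe:04:ec has ID 92:d:49:fe:4:ec, so any lookup against this file has to normalize octets before comparing.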
	... (Attempts 11-23, 16:07:50.801400 through 16:08:14.832659, elided: every two seconds the driver re-reads /var/db/dhcpd_leases, finds the same 18 entries listed under Attempt 10 above, and never finds a lease for 76:f1:26:5c:66:7c) ...
	I1204 16:08:16.833169   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 24
	I1204 16:08:16.833182   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:16.833227   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:08:16.834205   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:08:16.834324   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:16.834334   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:16.834341   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:16.834346   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:16.834353   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:16.834359   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:16.834371   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:16.834378   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:16.834384   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:16.834392   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:16.834419   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:16.834431   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:16.834439   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:16.834445   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:16.834451   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:16.834458   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:16.834471   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:16.834480   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:16.834489   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:18.833653   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 25
	I1204 16:08:18.833668   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:18.833687   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:08:18.834676   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:08:18.834785   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:18.834793   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:18.834800   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:18.834806   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:18.834811   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:18.834818   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:18.834840   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:18.834853   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:18.834862   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:18.834869   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:18.834875   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:18.834883   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:18.834899   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:18.834912   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:18.834920   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:18.834930   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:18.834941   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:18.834952   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:18.834961   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:20.831034   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 26
	I1204 16:08:20.831051   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:20.831113   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:08:20.832140   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:08:20.832236   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:20.832246   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:20.832253   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:20.832265   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:20.832302   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:20.832314   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:20.832324   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:20.832332   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:20.832341   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:20.832348   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:20.832357   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:20.832367   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:20.832375   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:20.832387   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:20.832397   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:20.832405   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:20.832412   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:20.832423   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:20.832431   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:22.828729   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 27
	I1204 16:08:22.828742   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:22.828795   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:08:22.829893   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:08:22.829943   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:22.829955   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:22.829963   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:22.829976   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:22.829984   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:22.829989   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:22.829995   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:22.830002   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:22.830009   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:22.830025   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:22.830037   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:22.830046   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:22.830054   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:22.830066   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:22.830076   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:22.830092   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:22.830105   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:22.830120   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:22.830128   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:24.827236   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 28
	I1204 16:08:24.827251   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:24.827295   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:08:24.828453   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:08:24.828554   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:24.828592   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:24.828605   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:24.828626   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:24.828638   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:24.828647   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:24.828661   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:24.828668   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:24.828676   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:24.828683   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:24.828690   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:24.828696   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:24.828704   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:24.828712   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:24.828720   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:24.828729   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:24.828737   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:24.828743   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:24.828749   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:26.827249   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Attempt 29
	I1204 16:08:26.827261   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 16:08:26.827309   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | hyperkit pid from json: 22441
	I1204 16:08:26.828367   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Searching for 76:f1:26:5c:66:7c in /var/db/dhcpd_leases ...
	I1204 16:08:26.828424   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I1204 16:08:26.828434   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:1e:8d:c2:3c:32:e4 ID:1,1e:8d:c2:3c:32:e4 Lease:0x6750fbac}
	I1204 16:08:26.828452   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:fa:db:9f:5a:55:77 ID:1,fa:db:9f:5a:55:77 Lease:0x6750faec}
	I1204 16:08:26.828459   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:49:3b:6f:a8:75 ID:1,82:49:3b:6f:a8:75 Lease:0x6750fa52}
	I1204 16:08:26.828466   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ea:6c:f6:24:a7:1c ID:1,ea:6c:f6:24:a7:1c Lease:0x6750eba8}
	I1204 16:08:26.828472   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:0d:49:fe:04:ec ID:1,92:d:49:fe:4:ec Lease:0x6750fa10}
	I1204 16:08:26.828485   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:2e:c1:72:c2:8c:52 ID:1,2e:c1:72:c2:8c:52 Lease:0x6750f9d4}
	I1204 16:08:26.828495   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:be:a4:4e:cc:46:eb ID:1,be:a4:4e:cc:46:eb Lease:0x6750e9a0}
	I1204 16:08:26.828503   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:52:62:44:b2:45:9a ID:1,52:62:44:b2:45:9a Lease:0x6750f769}
	I1204 16:08:26.828511   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:9e:45:96:2d:a8:93 ID:1,9e:45:96:2d:a8:93 Lease:0x6750f709}
	I1204 16:08:26.828520   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:16:14:a9:0f:3c:1a ID:1,16:14:a9:f:3c:1a Lease:0x6750f6b7}
	I1204 16:08:26.828528   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:f6:f1:1e:09:c4:d0 ID:1,f6:f1:1e:9:c4:d0 Lease:0x6750f651}
	I1204 16:08:26.828556   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750f60f}
	I1204 16:08:26.828564   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750e76e}
	I1204 16:08:26.828572   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f5d6}
	I1204 16:08:26.828581   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f5ab}
	I1204 16:08:26.828588   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:7e:88:6b:de:a2:10 ID:1,7e:88:6b:de:a2:10 Lease:0x6750f266}
	I1204 16:08:26.828593   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:56:ea:fb:8f:4f:d5 ID:1,56:ea:fb:8f:4f:d5 Lease:0x6750f19d}
	I1204 16:08:26.828601   22359 main.go:141] libmachine: (force-systemd-env-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:b6:eb:fa:b5:f1:f1 ID:1,b6:eb:fa:b5:f1:f1 Lease:0x6750f012}
	I1204 16:08:28.827560   22359 client.go:171] duration metric: took 1m0.983875737s to LocalClient.Create
	I1204 16:08:30.827013   22359 start.go:128] duration metric: took 1m3.020588837s to createHost
	I1204 16:08:30.827026   22359 start.go:83] releasing machines lock for "force-systemd-env-608000", held for 1m3.02068403s
	W1204 16:08:30.827107   22359 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-608000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:f1:26:5c:66:7c
	I1204 16:08:30.890351   22359 out.go:201] 
	W1204 16:08:30.911189   22359 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:f1:26:5c:66:7c
	W1204 16:08:30.911209   22359 out.go:270] * 
	W1204 16:08:30.912012   22359 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 16:08:30.974164   22359 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-608000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-608000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-608000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (204.012624ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-608000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-608000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-04 16:08:31.304269 -0800 PST m=+3362.021532458
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-608000 -n force-systemd-env-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-608000 -n force-systemd-env-608000: exit status 7 (99.989098ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 16:08:31.401440   22484 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1204 16:08:31.401465   22484 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-608000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-608000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-608000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-608000: (5.271759329s)
--- FAIL: TestForceSystemdEnv (232.55s)
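The failure above is the hyperkit driver's lease-poll loop timing out: after creating the VM it re-reads /var/db/dhcpd_leases roughly every two seconds (each "Attempt N" block is one pass), looking for the MAC address it generated (76:f1:26:5c:66:7c). That address never appears among the 18 leases, so LocalClient.Create gives up after about a minute with "IP address never found in dhcp leases file". Below is a minimal Go sketch of that polling pattern, not the driver's actual implementation; the lease-file path comes from the log, while the ip_address=/hw_address= field names assume the conventional macOS dhcpd_leases layout.

// leasescan.go — minimal sketch of the poll loop visible in the log above.
// Not the hyperkit driver's real code; field names assume the usual
// /var/db/dhcpd_leases format (one {...} block per lease).
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

var (
	ipRe = regexp.MustCompile(`ip_address=(\S+)`)
	hwRe = regexp.MustCompile(`hw_address=\d+,(\S+)`)
)

// normalizeMAC strips the leading zero from each octet, matching how the
// lease file stores hardware addresses (e.g. 92:0d:49:fe:04:ec is recorded
// as 92:d:49:fe:4:ec, as seen in the dhcp entries above).
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// lookupIP scans the lease file once and returns the IP whose entry
// carries the wanted MAC, if any.
func lookupIP(leaseFile, mac string) (string, bool) {
	data, err := os.ReadFile(leaseFile)
	if err != nil {
		return "", false
	}
	want := normalizeMAC(mac)
	var ip string
	for _, line := range strings.Split(string(data), "\n") {
		if m := ipRe.FindStringSubmatch(line); m != nil {
			ip = m[1] // remember the IP of the entry we are currently inside
		}
		if m := hwRe.FindStringSubmatch(line); m != nil && normalizeMAC(m[1]) == want {
			return ip, true
		}
	}
	return "", false
}

func main() {
	const mac = "76:f1:26:5c:66:7c" // the MAC the log above was waiting for
	for attempt := 0; attempt < 30; attempt++ {
		if ip, ok := lookupIP("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("found %s at %s (attempt %d)\n", mac, ip, attempt)
			return
		}
		time.Sleep(2 * time.Second) // timestamps above show ~2s between attempts
	}
	fmt.Println("IP address never found in dhcp leases file")
}

The per-octet zero-stripping is the detail worth noticing: a naive exact string match against the generated MAC would miss real leases such as 92:d:49:fe:4:ec, which is why both sides are normalized before comparison.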

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (222.75s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-098000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-098000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-098000 -v=7 --alsologtostderr: (27.123004288s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-098000 --wait=true -v=7 --alsologtostderr
E1204 15:33:37.662489   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:35:53.798339   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-098000 --wait=true -v=7 --alsologtostderr: exit status 90 (3m11.03965328s)

-- stdout --
	* [ha-098000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-098000" primary control-plane node in "ha-098000" cluster
	* Restarting existing hyperkit VM for "ha-098000" ...
	* Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	* Enabled addons: 
	
	* Starting "ha-098000-m02" control-plane node in "ha-098000" cluster
	* Restarting existing hyperkit VM for "ha-098000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-098000-m03" control-plane node in "ha-098000" cluster
	* Restarting existing hyperkit VM for "ha-098000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	* Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	  - env NO_PROXY=192.169.0.5
	  - env NO_PROXY=192.169.0.5,192.169.0.6
	* Verifying Kubernetes components...
	
	* Starting "ha-098000-m04" worker node in "ha-098000" cluster
	* Restarting existing hyperkit VM for "ha-098000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	

-- /stdout --
** stderr ** 
	I1204 15:32:53.124576   20196 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:32:53.124878   20196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:53.124886   20196 out.go:358] Setting ErrFile to fd 2...
	I1204 15:32:53.124892   20196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:53.125142   20196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:32:53.126967   20196 out.go:352] Setting JSON to false
	I1204 15:32:53.159313   20196 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5543,"bootTime":1733349630,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 15:32:53.159464   20196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:32:53.181549   20196 out.go:177] * [ha-098000] minikube v1.34.0 on Darwin 15.0.1
	I1204 15:32:53.224271   20196 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:32:53.224311   20196 notify.go:220] Checking for updates...
	I1204 15:32:53.267840   20196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:32:53.289126   20196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 15:32:53.310338   20196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:32:53.331010   20196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 15:32:53.352255   20196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:32:53.373929   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:32:53.374098   20196 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:32:53.374835   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.374907   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:32:53.386958   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58600
	I1204 15:32:53.387294   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:32:53.387686   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:32:53.387699   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:32:53.387905   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:32:53.388016   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.418809   20196 out.go:177] * Using the hyperkit driver based on existing profile
	I1204 15:32:53.461003   20196 start.go:297] selected driver: hyperkit
	I1204 15:32:53.461036   20196 start.go:901] validating driver "hyperkit" against &{Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:53.461290   20196 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:32:53.461477   20196 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:53.461727   20196 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 15:32:53.473875   20196 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 15:32:53.481311   20196 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.481337   20196 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 15:32:53.486904   20196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:32:53.486942   20196 cni.go:84] Creating CNI manager for ""
	I1204 15:32:53.486987   20196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 15:32:53.487059   20196 start.go:340] cluster config:
	{Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:53.487162   20196 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:53.508071   20196 out.go:177] * Starting "ha-098000" primary control-plane node in "ha-098000" cluster
	I1204 15:32:53.529205   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:32:53.529292   20196 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1204 15:32:53.529312   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:32:53.529537   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:32:53.529555   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:32:53.529727   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:32:53.530635   20196 start.go:360] acquireMachinesLock for ha-098000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:32:53.530735   20196 start.go:364] duration metric: took 76.824µs to acquireMachinesLock for "ha-098000"
	I1204 15:32:53.530765   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:32:53.530784   20196 fix.go:54] fixHost starting: 
	I1204 15:32:53.531293   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.531320   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:32:53.542703   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58602
	I1204 15:32:53.543046   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:32:53.543457   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:32:53.543473   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:32:53.543695   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:32:53.543798   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.543917   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:32:53.544005   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.544085   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 19294
	I1204 15:32:53.545215   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid 19294 missing from process table
	I1204 15:32:53.545258   20196 fix.go:112] recreateIfNeeded on ha-098000: state=Stopped err=<nil>
	I1204 15:32:53.545275   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	W1204 15:32:53.545373   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:32:53.586803   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000" ...
	I1204 15:32:53.608028   20196 main.go:141] libmachine: (ha-098000) Calling .Start
	I1204 15:32:53.608287   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.608354   20196 main.go:141] libmachine: (ha-098000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid
	I1204 15:32:53.610773   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid 19294 missing from process table
	I1204 15:32:53.610786   20196 main.go:141] libmachine: (ha-098000) DBG | pid 19294 is in state "Stopped"
	I1204 15:32:53.610801   20196 main.go:141] libmachine: (ha-098000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid...
	I1204 15:32:53.611292   20196 main.go:141] libmachine: (ha-098000) DBG | Using UUID 70106e4e-8082-4c46-9279-8221d5ed18af
	I1204 15:32:53.728648   20196 main.go:141] libmachine: (ha-098000) DBG | Generated MAC 46:3b:47:9c:31:41
	I1204 15:32:53.728673   20196 main.go:141] libmachine: (ha-098000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:32:53.728953   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70106e4e-8082-4c46-9279-8221d5ed18af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000425170)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:32:53.728996   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70106e4e-8082-4c46-9279-8221d5ed18af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000425170)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:32:53.729068   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "70106e4e-8082-4c46-9279-8221d5ed18af", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/ha-098000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:32:53.729113   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 70106e4e-8082-4c46-9279-8221d5ed18af -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/ha-098000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:32:53.729129   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:32:53.730591   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Pid is 20209
	I1204 15:32:53.731014   20196 main.go:141] libmachine: (ha-098000) DBG | Attempt 0
	I1204 15:32:53.731028   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.731114   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:32:53.732978   20196 main.go:141] libmachine: (ha-098000) DBG | Searching for 46:3b:47:9c:31:41 in /var/db/dhcpd_leases ...
	I1204 15:32:53.733030   20196 main.go:141] libmachine: (ha-098000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:32:53.733053   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:32:53.733076   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f47a}
	I1204 15:32:53.733086   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f3e2}
	I1204 15:32:53.733096   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f369}
	I1204 15:32:53.733112   20196 main.go:141] libmachine: (ha-098000) DBG | Found match: 46:3b:47:9c:31:41
	I1204 15:32:53.733119   20196 main.go:141] libmachine: (ha-098000) DBG | IP: 192.169.0.5
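
The MAC-to-IP resolution above works by scanning the macOS vmnet DHCP lease database for the VM's generated hardware address. A minimal Go sketch of that idea (illustrative only: findIPByMAC is a made-up helper; it assumes ip_address precedes hw_address within each lease block and ignores the lease file's habit of dropping leading zeros from octets, visible in the ID fields above):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findIPByMAC scans the vmnet lease file for a hw_address entry matching
    // mac and returns the ip_address recorded in the same lease block.
    func findIPByMAC(leaseFile, mac string) (string, error) {
    	f, err := os.Open(leaseFile)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=")
    		}
    		// hw_address lines look like "hw_address=1,46:3b:47:9c:31:41".
    		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("%s not found in %s", mac, leaseFile)
    }

    func main() {
    	ip, err := findIPByMAC("/var/db/dhcpd_leases", "46:3b:47:9c:31:41")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(ip) // e.g. 192.169.0.5
    }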
	I1204 15:32:53.733163   20196 main.go:141] libmachine: (ha-098000) Calling .GetConfigRaw
	I1204 15:32:53.733987   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:32:53.734258   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:32:53.734730   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:32:53.734741   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.734939   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:32:53.735075   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:32:53.735212   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:32:53.735339   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:32:53.735471   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:32:53.735700   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:32:53.735888   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:32:53.735897   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:32:53.741792   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:32:53.798085   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:32:53.799084   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:32:53.799132   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:32:53.799147   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:32:53.799159   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:32:54.212915   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:32:54.212930   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:32:54.327517   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:32:54.327538   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:32:54.327567   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:32:54.327585   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:32:54.328504   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:32:54.328518   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:00.053293   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:33:00.053310   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:33:00.053327   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:33:00.080441   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:33:04.805929   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
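The provisioner's first SSH roundtrip simply reads the current hostname ("minikube", the ISO default) before renaming it. A self-contained sketch of such a roundtrip with golang.org/x/crypto/ssh (an approximation, not minikube's own SSH plumbing; the key path and address are the ones printed by sshutil.go below, and host-key checking is skipped because this targets a throwaway local VM):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local VM
    	}
    	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	out, err := sess.CombinedOutput("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out) // "minikube" until the rename below
    }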
	I1204 15:33:04.805956   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.806123   20196 buildroot.go:166] provisioning hostname "ha-098000"
	I1204 15:33:04.806135   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.806234   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.806337   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:04.806431   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.806539   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.806630   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:04.806774   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:04.806928   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:04.806937   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000 && echo "ha-098000" | sudo tee /etc/hostname
	I1204 15:33:04.881527   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000
	
	I1204 15:33:04.881546   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.881688   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:04.881782   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.881867   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.881972   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:04.882116   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:04.882259   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:04.882270   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:04.951908   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:33:04.951928   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:04.951941   20196 buildroot.go:174] setting up certificates
	I1204 15:33:04.951947   20196 provision.go:84] configureAuth start
	I1204 15:33:04.951953   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.952087   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:04.952194   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.952301   20196 provision.go:143] copyHostCerts
	I1204 15:33:04.952333   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:04.952388   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:04.952396   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:04.952514   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:04.952739   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:04.952770   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:04.952775   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:04.952846   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:04.953021   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:04.953050   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:04.953054   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:04.953117   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:04.953299   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000 san=[127.0.0.1 192.169.0.5 ha-098000 localhost minikube]
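
The server certificate must carry every name and IP in that san=[...] list, or TLS clients reaching the daemon by any of them will fail verification. A sketch of issuing such a certificate with Go's crypto/x509 (self-signed here for brevity; minikube signs against its ca.pem/ca-key.pem, and the field values are copied from the log, not from its code):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// SANs mirror the san=[...] list above: every address the server
    	// may be reached by has to appear here.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-098000"}},
    		DNSNames:     []string{"ha-098000", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }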
	I1204 15:33:05.029495   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:05.029569   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:05.029587   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.029725   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.029828   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.029935   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.030021   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:05.069556   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:05.069632   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:05.088502   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:05.088560   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 15:33:05.107211   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:05.107270   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:33:05.127045   20196 provision.go:87] duration metric: took 175.080758ms to configureAuth
	I1204 15:33:05.127060   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:05.127241   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:05.127255   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:05.127390   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.127495   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.127590   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.127679   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.127810   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.127983   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.128112   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.128119   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:05.194828   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:05.194840   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:05.194934   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:05.194945   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.195075   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.195184   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.195275   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.195365   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.195540   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.195677   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.195720   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:05.269411   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:05.269434   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.269574   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.269679   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.269784   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.269878   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.270029   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.270180   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.270192   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:06.947784   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
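The "No such file or directory" from diff and the "Created symlink" line are expected on a freshly provisioned VM: the idiom writes docker.service.new, lets diff decide whether anything changed, and only swaps the file in (with daemon-reload, enable, restart) when it did. The same compare-then-swap pattern in a local Go sketch (installIfChanged is a hypothetical helper, not minikube code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // installIfChanged mimics the diff-||-mv idiom from the log: replace
    // target (and signal that a reload is needed) only when the rendered
    // unit actually differs from what is on disk.
    func installIfChanged(target string, rendered []byte) (changed bool, err error) {
    	current, err := os.ReadFile(target)
    	if err == nil && bytes.Equal(current, rendered) {
    		return false, nil // up to date; skip daemon-reload/restart
    	}
    	if err := os.WriteFile(target+".new", rendered, 0o644); err != nil {
    		return false, err
    	}
    	// Rename is atomic on the same filesystem, so readers never see a
    	// half-written unit file.
    	return true, os.Rename(target+".new", target)
    }

    func main() {
    	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
    	fmt.Println(changed, err)
    }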
	I1204 15:33:06.947801   20196 machine.go:96] duration metric: took 13.212685267s to provisionDockerMachine
	I1204 15:33:06.947813   20196 start.go:293] postStartSetup for "ha-098000" (driver="hyperkit")
	I1204 15:33:06.947820   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:06.947830   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:06.948036   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:06.948057   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:06.948150   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:06.948258   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:06.948370   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:06.948484   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:06.990689   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:06.994074   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:06.994089   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:06.994206   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:06.994349   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:06.994356   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:06.994521   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:07.005479   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:07.040997   20196 start.go:296] duration metric: took 93.160395ms for postStartSetup
	I1204 15:33:07.041019   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.041214   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:07.041227   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.041320   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.041401   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.041488   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.041577   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.079449   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:07.079522   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:07.131796   20196 fix.go:56] duration metric: took 13.600616251s for fixHost
	I1204 15:33:07.131819   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.131964   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.132056   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.132147   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.132258   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.132400   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:07.132541   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:07.132548   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:07.198066   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355187.085615924
	
	I1204 15:33:07.198080   20196 fix.go:216] guest clock: 1733355187.085615924
	I1204 15:33:07.198085   20196 fix.go:229] Guest: 2024-12-04 15:33:07.085615924 -0800 PST Remote: 2024-12-04 15:33:07.131808 -0800 PST m=+14.052161483 (delta=-46.192076ms)
	I1204 15:33:07.198107   20196 fix.go:200] guest clock delta is within tolerance: -46.192076ms
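
The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the skew when it falls within tolerance, avoiding a needless clock resync. A compact Go sketch of that comparison (withinTolerance is illustrative; the timestamps are the ones from the log):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports the guest-vs-host clock delta and whether its
    // magnitude is acceptable.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		return delta, -delta <= tol
    	}
    	return delta, delta <= tol
    }

    func main() {
    	guest := time.Unix(1733355187, 85615924)  // parsed from the VM
    	host := time.Unix(1733355187, 131808000)  // local clock sample
    	delta, ok := withinTolerance(guest, host, time.Second)
    	fmt.Println(delta, ok) // -46.192076ms true
    }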
	I1204 15:33:07.198113   20196 start.go:83] releasing machines lock for "ha-098000", held for 13.666979222s
	I1204 15:33:07.198132   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198272   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:07.198375   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198673   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198785   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198878   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:07.198921   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.198947   20196 ssh_runner.go:195] Run: cat /version.json
	I1204 15:33:07.198968   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.199026   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.199093   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.199123   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.199209   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.199228   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.199298   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.199315   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.199396   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.233868   20196 ssh_runner.go:195] Run: systemctl --version
	I1204 15:33:07.278985   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 15:33:07.283423   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:07.283478   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:07.298510   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:07.298524   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:07.298651   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:07.315201   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:07.324137   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:07.332963   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:07.333027   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:07.341883   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:07.350757   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:07.359678   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:07.368612   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:07.377607   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:07.386447   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:07.395124   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:33:07.404070   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:07.412097   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:07.412157   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:07.421208   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
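
The status-255 sysctl probe above is tolerated by design: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the runner falls back to modprobe and then enables IPv4 forwarding. Roughly, in Go (ensureBridgeNetfilter is a hypothetical helper; must run as root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func ensureBridgeNetfilter() error {
    	// The sysctl key appears only after the module loads, so a failed
    	// stat here is expected on first boot, not fatal.
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
    		return nil
    	}
    	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }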
	I1204 15:33:07.429418   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:07.524346   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:07.542570   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:07.542668   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:07.559288   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:07.569950   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:07.583434   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:07.593916   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:07.603881   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:07.624337   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:07.634820   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:07.649640   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:07.652619   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:07.659817   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:07.673288   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:07.772876   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:07.878665   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:07.878744   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:07.892585   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:07.986161   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:33:10.248338   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.262094537s)
	I1204 15:33:10.248412   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:33:10.259004   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:33:10.272350   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:10.282710   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:33:10.373201   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:33:10.481588   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:10.590503   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:33:10.604294   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:10.614461   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:10.704083   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:33:10.769517   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:33:10.769615   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:33:10.774192   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:33:10.774266   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:33:10.777449   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:33:10.800815   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 15:33:10.800899   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:10.817205   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:10.856841   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:33:10.856890   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:10.857354   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:33:10.862069   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:10.871775   20196 kubeadm.go:883] updating cluster {Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 15:33:10.871875   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:10.871949   20196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:33:10.885784   20196 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1204 15:33:10.885796   20196 docker.go:619] Images already preloaded, skipping extraction
	I1204 15:33:10.885882   20196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:33:10.904423   20196 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1204 15:33:10.904444   20196 cache_images.go:84] Images are preloaded, skipping loading
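
"Images are preloaded, skipping loading" means every image kubeadm will need was found in the `docker images` listing, so the preload tarball is not extracted again. The check amounts to a set difference, sketched here (preloaded is a made-up helper, not minikube's function):

    package main

    import "fmt"

    // preloaded returns the required images missing from the runtime's
    // listing; an empty result means extraction can be skipped.
    func preloaded(have, want []string) (missing []string) {
    	set := make(map[string]bool, len(have))
    	for _, img := range have {
    		set[img] = true
    	}
    	for _, img := range want {
    		if !set[img] {
    			missing = append(missing, img)
    		}
    	}
    	return missing
    }

    func main() {
    	have := []string{"registry.k8s.io/kube-apiserver:v1.31.2", "registry.k8s.io/etcd:3.5.15-0"}
    	want := []string{"registry.k8s.io/kube-apiserver:v1.31.2"}
    	fmt.Println(preloaded(have, want)) // [] -> skip loading
    }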
	I1204 15:33:10.904450   20196 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1204 15:33:10.904531   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:33:10.904612   20196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 15:33:10.937949   20196 cni.go:84] Creating CNI manager for ""
	I1204 15:33:10.937963   20196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 15:33:10.937974   20196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 15:33:10.938009   20196 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-098000 NodeName:ha-098000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 15:33:10.938085   20196 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-098000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
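minikube renders that kubeadm YAML from the options struct above via Go's text/template. A heavily trimmed sketch of such a rendering, covering only the InitConfiguration block (the template text and map keys here are placeholders, not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	t := template.Must(template.New("init").Parse(initCfg))
    	err := t.Execute(os.Stdout, map[string]any{
    		"AdvertiseAddress": "192.169.0.5",
    		"APIServerPort":    8443,
    		"CRISocket":        "/var/run/cri-dockerd.sock",
    		"NodeName":         "ha-098000",
    	})
    	if err != nil {
    		panic(err)
    	}
    }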
	I1204 15:33:10.938101   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:33:10.938174   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:33:10.950599   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:33:10.950678   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
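That manifest runs kube-vip as a static pod on each control-plane node: the plndr-cp-lock lease elects a leader, which claims the virtual IP 192.169.0.254 and (with lb_enable) balances port 8443 across the apiservers. A quick reachability probe for that VIP (an illustrative check, not part of minikube):

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// The VIP only answers once the kube-vip leader has claimed it and
    	// an apiserver behind it is serving, so poll with a short timeout.
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 3*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("control-plane VIP is up")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Fprintln(os.Stderr, "VIP never came up")
    	os.Exit(1)
    }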
	I1204 15:33:10.950747   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:33:10.959008   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:33:10.959066   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 15:33:10.966355   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1204 15:33:10.979785   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:33:10.993124   20196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1204 15:33:11.007280   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:33:11.020699   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:33:11.023569   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:11.032639   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:11.133629   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:11.148832   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.5
	I1204 15:33:11.148845   20196 certs.go:194] generating shared ca certs ...
	I1204 15:33:11.148855   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.149029   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:33:11.149085   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:33:11.149095   20196 certs.go:256] generating profile certs ...
	I1204 15:33:11.149184   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:33:11.149204   20196 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330
	I1204 15:33:11.149219   20196 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I1204 15:33:11.369000   20196 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 ...
	I1204 15:33:11.369023   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330: {Name:mkee72feeeccd665b141717d3a28fdfb2c7bde31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.369371   20196 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330 ...
	I1204 15:33:11.369381   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330: {Name:mk73951855cf52179c105169e788f46cc4d39a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.369660   20196 certs.go:381] copying /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt
	I1204 15:33:11.369853   20196 certs.go:385] copying /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key
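
Note the two-step dance above: the keypair is generated under a random-suffixed name (.edefc330) and only then copied to its final path, with a named lock serializing writers. An in-process approximation in Go (writeFileLocked is a made-up helper, and minikube's real lock is cross-process, which a sync.Mutex is not):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"sync"
    )

    // Per-path locks so concurrent provisioners never interleave writes to
    // the same cert file.
    var fileLocks sync.Map

    func writeFileLocked(path string, data []byte) error {
    	mu, _ := fileLocks.LoadOrStore(path, &sync.Mutex{})
    	mu.(*sync.Mutex).Lock()
    	defer mu.(*sync.Mutex).Unlock()

    	// Stage under a random-suffixed name, then swap into place.
    	tmp, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path)+".*")
    	if err != nil {
    		return err
    	}
    	if _, err := tmp.Write(data); err != nil {
    		tmp.Close()
    		return err
    	}
    	tmp.Close()
    	return os.Rename(tmp.Name(), path)
    }

    func main() {
    	fmt.Println(writeFileLocked("/tmp/apiserver.crt", []byte("...")))
    }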
	I1204 15:33:11.370068   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:33:11.370078   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:33:11.370100   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:33:11.370120   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:33:11.370139   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:33:11.370157   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:33:11.370176   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:33:11.370196   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:33:11.370213   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:33:11.370295   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:33:11.370331   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:33:11.370340   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:33:11.370387   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:33:11.370418   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:33:11.370453   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:33:11.370519   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:11.370552   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.370573   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.370591   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.371058   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:33:11.399000   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:33:11.441701   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:33:11.476788   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:33:11.508692   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:33:11.528963   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:33:11.548308   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:33:11.567414   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:33:11.586589   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:33:11.605437   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:33:11.624356   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:33:11.643314   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 15:33:11.656890   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:33:11.661063   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:33:11.670050   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.673329   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.673378   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.677431   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 15:33:11.686327   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:33:11.695205   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.698569   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.698616   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.702683   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:33:11.711573   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:33:11.720441   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.723730   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.723772   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.727893   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
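	
	For reference, the three hash-and-link sequences above follow OpenSSL's subject-hash lookup convention: a CA certificate is trusted once a file named <subject-hash>.0 under /etc/ssl/certs points at it. A minimal sketch of the same wiring, assuming a hypothetical PEM certificate path:
	
	    # Trust a CA the way the steps above do; CERT is an illustrative path.
	    CERT=/usr/share/ca-certificates/example.pem
	    # OpenSSL resolves trusted CAs through files named <subject-hash>.0
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    # Link the hash name to the certificate unless a link already exists.
	    sudo test -L "/etc/ssl/certs/$HASH.0" || sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
	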
	I1204 15:33:11.736772   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:33:11.740128   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:33:11.744800   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:33:11.749129   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:33:11.753890   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:33:11.758287   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:33:11.762608   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
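	
	Each of the six openssl runs above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what would push minikube into regenerating the certificate. For example:
	
	    # Exit status says whether the cert survives the next 24 hours.
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "certificate valid for at least 24h"
	    else
	        echo "certificate expires within 24h"   # would be regenerated
	    fi
	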
	I1204 15:33:11.766918   20196 kubeadm.go:392] StartCluster: {Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:33:11.767041   20196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:33:11.779240   20196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 15:33:11.787479   20196 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 15:33:11.787491   20196 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 15:33:11.787539   20196 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 15:33:11.796840   20196 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:33:11.797140   20196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-098000" does not appear in /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.797223   20196 kubeconfig.go:62] /Users/jenkins/minikube-integration/20045-17258/kubeconfig needs updating (will repair): [kubeconfig missing "ha-098000" cluster setting kubeconfig missing "ha-098000" context setting]
	I1204 15:33:11.797420   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/kubeconfig: {Name:mk988c2800ea459104871ce2a5d515d71b51f8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.797819   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.798024   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:33:11.798341   20196 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 15:33:11.798533   20196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 15:33:11.806274   20196 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1204 15:33:11.806292   20196 kubeadm.go:597] duration metric: took 18.792967ms to restartPrimaryControlPlane
	I1204 15:33:11.806299   20196 kubeadm.go:394] duration metric: took 39.384435ms to StartCluster
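	
	The restart decision above hinges on the "sudo diff -u" of the live kubeadm.yaml against the freshly rendered kubeadm.yaml.new: identical files make diff exit 0, so no reconfiguration is needed. A sketch of that check, with an illustrative action on drift:
	
	    # Reconfigure only when the rendered kubeadm config differs from the live one.
	    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
	        echo "running cluster does not require reconfiguration"
	    else
	        echo "config drifted; reconfigure the control plane"   # illustrative action
	    fi
	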
	I1204 15:33:11.806313   20196 settings.go:142] acquiring lock: {Name:mk99ad63e4feda725ee10448138b299c26bf8cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.806400   20196 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.806790   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/kubeconfig: {Name:mk988c2800ea459104871ce2a5d515d71b51f8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.807009   20196 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:11.807022   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:33:11.807035   20196 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 15:33:11.807145   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.850133   20196 out.go:177] * Enabled addons: 
	I1204 15:33:11.871157   20196 addons.go:510] duration metric: took 64.116535ms for enable addons: enabled=[]
	I1204 15:33:11.871244   20196 start.go:246] waiting for cluster config update ...
	I1204 15:33:11.871256   20196 start.go:255] writing updated cluster config ...
	I1204 15:33:11.894284   20196 out.go:201] 
	I1204 15:33:11.915277   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.915378   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:11.939339   20196 out.go:177] * Starting "ha-098000-m02" control-plane node in "ha-098000" cluster
	I1204 15:33:11.981186   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:11.981222   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:11.981421   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:33:11.981442   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:11.981558   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:11.982398   20196 start.go:360] acquireMachinesLock for ha-098000-m02: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:11.982475   20196 start.go:364] duration metric: took 58.776µs to acquireMachinesLock for "ha-098000-m02"
	I1204 15:33:11.982495   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:33:11.982501   20196 fix.go:54] fixHost starting: m02
	I1204 15:33:11.982818   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:11.982845   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:11.994288   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58624
	I1204 15:33:11.994640   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:11.995007   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:11.995021   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:11.995253   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:11.995373   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:11.995490   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetState
	I1204 15:33:11.995578   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:11.995648   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20139
	I1204 15:33:11.996810   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 20139 missing from process table
	I1204 15:33:11.996835   20196 fix.go:112] recreateIfNeeded on ha-098000-m02: state=Stopped err=<nil>
	I1204 15:33:11.996847   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	W1204 15:33:11.996942   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:33:12.039213   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m02" ...
	I1204 15:33:12.060086   20196 main.go:141] libmachine: (ha-098000-m02) Calling .Start
	I1204 15:33:12.060346   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:12.060380   20196 main.go:141] libmachine: (ha-098000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid
	I1204 15:33:12.061608   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 20139 missing from process table
	I1204 15:33:12.061617   20196 main.go:141] libmachine: (ha-098000-m02) DBG | pid 20139 is in state "Stopped"
	I1204 15:33:12.061626   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid...
	I1204 15:33:12.061806   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Using UUID 2486faac-afab-449a-8055-5ee234f7d16f
	I1204 15:33:12.086653   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Generated MAC b2:39:f5:23:0b:32
	I1204 15:33:12.086676   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:33:12.086820   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2486faac-afab-449a-8055-5ee234f7d16f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004233b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:12.086851   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2486faac-afab-449a-8055-5ee234f7d16f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004233b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:12.086887   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2486faac-afab-449a-8055-5ee234f7d16f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/ha-098000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:33:12.086920   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2486faac-afab-449a-8055-5ee234f7d16f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/ha-098000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:33:12.086929   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:33:12.088450   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Pid is 20220
	I1204 15:33:12.088937   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Attempt 0
	I1204 15:33:12.088953   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:12.089027   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20220
	I1204 15:33:12.090875   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Searching for b2:39:f5:23:0b:32 in /var/db/dhcpd_leases ...
	I1204 15:33:12.090963   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:33:12.090982   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:33:12.091003   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:33:12.091026   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f47a}
	I1204 15:33:12.091037   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Found match: b2:39:f5:23:0b:32
	I1204 15:33:12.091047   20196 main.go:141] libmachine: (ha-098000-m02) DBG | IP: 192.169.0.6
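	
	The hyperkit driver has no guest agent, so it recovers the VM's address by matching the MAC generated above against macOS's vmnet lease database, /var/db/dhcpd_leases; note the lease file drops leading zeros from MAC octets (0b is stored as b, visible in the dhcp entry above), which the driver normalizes before comparing. A minimal sketch of the same lookup, assuming the usual multi-line lease-entry layout:
	
	    # Print the IP leased to a MAC, written the way the lease file stores it
	    # (leading zeros stripped). MAC value taken from the log above.
	    awk -v mac='hw_address=1,b2:39:f5:23:b:32' '
	        /ip_address=/   { ip = substr($0, index($0, "=") + 1) }
	        index($0, mac)  { print ip }
	    ' /var/db/dhcpd_leases
	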
	I1204 15:33:12.091078   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetConfigRaw
	I1204 15:33:12.091745   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:12.091957   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:12.092493   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:33:12.092503   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:12.092649   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:12.092776   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:12.092901   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:12.093004   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:12.093096   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:12.093267   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:12.093463   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:12.093473   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:33:12.099465   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:33:12.108663   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:33:12.109633   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:12.109661   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:12.109674   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:12.109689   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:12.508437   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:33:12.508452   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:33:12.623247   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:12.623267   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:12.623283   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:12.623289   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:12.624086   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:33:12.624095   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:18.362951   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 15:33:18.362990   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 15:33:18.362997   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 15:33:18.387781   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 15:33:23.149238   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:23.149254   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.149403   20196 buildroot.go:166] provisioning hostname "ha-098000-m02"
	I1204 15:33:23.149415   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.149509   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.149612   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.149697   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.149796   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.149882   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.150012   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.150165   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.150173   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m02 && echo "ha-098000-m02" | sudo tee /etc/hostname
	I1204 15:33:23.207677   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m02
	
	I1204 15:33:23.207693   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.207831   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.207942   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.208053   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.208156   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.208340   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.208503   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.208515   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:23.265398   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:33:23.265414   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:23.265426   20196 buildroot.go:174] setting up certificates
	I1204 15:33:23.265434   20196 provision.go:84] configureAuth start
	I1204 15:33:23.265443   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.265604   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:23.265696   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.265792   20196 provision.go:143] copyHostCerts
	I1204 15:33:23.265821   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:23.265868   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:23.265874   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:23.266044   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:23.266308   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:23.266347   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:23.266352   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:23.266606   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:23.266780   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:23.266810   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:23.266815   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:23.266891   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:23.267067   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m02 san=[127.0.0.1 192.169.0.6 ha-098000-m02 localhost minikube]
	I1204 15:33:23.418588   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:23.418649   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:23.418663   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.418794   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.418895   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.418994   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.419094   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:23.449777   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:23.449845   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:23.469736   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:23.469808   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:33:23.489512   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:23.489573   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:33:23.509353   20196 provision.go:87] duration metric: took 243.902721ms to configureAuth
	I1204 15:33:23.509367   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:23.509536   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:23.509550   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:23.509693   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.509787   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.509886   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.509981   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.510059   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.510190   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.510321   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.510328   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:23.557917   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:23.557929   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:23.558018   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:23.558034   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.558154   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.558255   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.558337   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.558428   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.558600   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.558722   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.558764   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:23.619577   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:23.619599   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.619741   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.619853   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.619941   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.620042   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.620196   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.620336   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.620348   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:25.265062   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:33:25.265078   20196 machine.go:96] duration metric: took 13.172205227s to provisionDockerMachine
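	
	The long SSH one-liner above is an idempotent install: diff exits non-zero when the installed docker.service differs from the rendered one or, as here, does not exist yet ("can't stat"), and only then is the new unit promoted and the daemon restarted. Restated readably, with the same paths:
	
	    NEW=/lib/systemd/system/docker.service.new
	    DST=/lib/systemd/system/docker.service
	    # diff exit 0 = unchanged, skip; non-zero (differs or DST missing) = install.
	    sudo diff -u "$DST" "$NEW" || {
	        sudo mv "$NEW" "$DST"               # promote the rendered unit
	        sudo systemctl -f daemon-reload     # reread unit files
	        sudo systemctl -f enable docker     # created the symlink logged above
	        sudo systemctl -f restart docker    # apply immediately
	    }
	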
	I1204 15:33:25.265092   20196 start.go:293] postStartSetup for "ha-098000-m02" (driver="hyperkit")
	I1204 15:33:25.265099   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:25.265111   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.265311   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:25.265332   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.265441   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.265529   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.265633   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.265739   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.304266   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:25.311180   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:25.311193   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:25.311283   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:25.311424   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:25.311431   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:25.311607   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:25.324859   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:25.357942   20196 start.go:296] duration metric: took 92.839826ms for postStartSetup
	I1204 15:33:25.357966   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.358160   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:25.358173   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.358261   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.358352   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.358436   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.358521   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.389685   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:25.389754   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:25.422337   20196 fix.go:56] duration metric: took 13.439453986s for fixHost
	I1204 15:33:25.422364   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.422533   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.422647   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.422735   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.422815   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.422958   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:25.423099   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:25.423107   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:25.472632   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355205.621764225
	
	I1204 15:33:25.472647   20196 fix.go:216] guest clock: 1733355205.621764225
	I1204 15:33:25.472652   20196 fix.go:229] Guest: 2024-12-04 15:33:25.621764225 -0800 PST Remote: 2024-12-04 15:33:25.422353 -0800 PST m=+32.342189685 (delta=199.411225ms)
	I1204 15:33:25.472663   20196 fix.go:200] guest clock delta is within tolerance: 199.411225ms
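	
	The guest clock is sampled with date +%s.%N (seconds.nanoseconds) and compared against the host's wall clock at the moment the command returns; only a delta beyond minikube's tolerance would trigger a resync, and the 199 ms measured here passes. A sketch of the comparison, using the two timestamps above (the threshold is illustrative, not minikube's exact value):
	
	    # Guest/host timestamps from the lines above; 1.0 s threshold is illustrative.
	    awk -v guest=1733355205.621764225 -v host=1733355205.422353 'BEGIN {
	        delta = host - guest
	        if (delta < 0) delta = -delta
	        if (delta < 1.0) print "within tolerance"; else print "resync guest clock"
	    }'
	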
	I1204 15:33:25.472667   20196 start.go:83] releasing machines lock for "ha-098000-m02", held for 13.489803052s
	I1204 15:33:25.472697   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.472837   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:25.496277   20196 out.go:177] * Found network options:
	I1204 15:33:25.537194   20196 out.go:177]   - NO_PROXY=192.169.0.5
	W1204 15:33:25.558335   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:25.558422   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559432   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559728   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559899   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:25.559950   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	W1204 15:33:25.560026   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:25.560173   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:33:25.560212   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.560218   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.560413   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.560435   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.560588   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.560653   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.560755   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.560803   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.560929   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	W1204 15:33:25.589676   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:25.589750   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:25.635633   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:25.635654   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:25.635765   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:25.651707   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:25.660095   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:25.668588   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:25.668650   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:25.676830   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:25.685079   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:25.693509   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:25.701733   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:25.710137   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:25.718450   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:25.726929   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:33:25.735114   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:25.742569   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:25.742622   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:25.751585   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
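	
	The probe, modprobe, and echo above establish two kernel prerequisites for Kubernetes networking: bridged pod traffic must traverse iptables (br_netfilter exposes the bridge-nf sysctls, which is why the first sysctl probe failed before the module was loaded) and IPv4 forwarding must be on. Spelled out, with the sysctl write that usually follows the module load (the log above only probes the key):
	
	    sudo modprobe br_netfilter                            # exposes /proc/sys/net/bridge/*
	    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1   # bridged frames traverse iptables
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # allow pod-to-pod forwarding
	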
	I1204 15:33:25.759751   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:25.851537   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:25.870178   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:25.870261   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:25.886777   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:25.898631   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:25.915954   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:25.927090   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:25.937345   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:25.958314   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:25.968609   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:25.983636   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:25.986491   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:25.993508   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:26.006712   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:26.100912   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:26.190828   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:26.190859   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:26.204976   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:26.305524   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:33:28.666691   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.361082583s)
	I1204 15:33:28.666774   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:33:28.677849   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:33:28.691293   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:28.702315   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:33:28.804235   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:33:28.895456   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:29.008598   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:33:29.022244   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:29.033285   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:29.123647   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:33:29.194113   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:33:29.194213   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:33:29.198266   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:33:29.198329   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:33:29.201217   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:33:29.226480   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 15:33:29.226574   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:29.245410   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:29.286251   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:33:29.327924   20196 out.go:177]   - env NO_PROXY=192.169.0.5
	I1204 15:33:29.348859   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:29.349296   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:33:29.353761   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
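The one-liner above is a small replace-in-place idiom: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The same edit in Go, as a sketch that prints the result instead of performing the sudo cp step:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.169.0.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Mirrors grep -v $'\thost.minikube.internal$': drop the stale mapping.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// In the log, this output goes to /tmp/h.$$ and then sudo cp over /etc/hosts.
	fmt.Println(strings.Join(kept, "\n"))
}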
	I1204 15:33:29.363356   20196 mustload.go:65] Loading cluster: ha-098000
	I1204 15:33:29.363524   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:29.363748   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:29.363768   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:29.374807   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58646
	I1204 15:33:29.375120   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:29.375473   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:29.375491   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:29.375697   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:29.375799   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:33:29.375885   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:29.375946   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:33:29.377121   20196 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:33:29.377369   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:29.377393   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:29.388419   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58648
	I1204 15:33:29.388721   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:29.389015   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:29.389049   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:29.389281   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:29.389378   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:29.389495   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.6
	I1204 15:33:29.389501   20196 certs.go:194] generating shared ca certs ...
	I1204 15:33:29.389513   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:29.389656   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:33:29.389710   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:33:29.389719   20196 certs.go:256] generating profile certs ...
	I1204 15:33:29.389811   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:33:29.389878   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.3ecf7e1a
	I1204 15:33:29.389931   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:33:29.389938   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:33:29.389964   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:33:29.389985   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:33:29.390009   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:33:29.390029   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:33:29.390048   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:33:29.390067   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:33:29.390086   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:33:29.390163   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:33:29.390207   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:33:29.390215   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:33:29.390250   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:33:29.390285   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:33:29.390316   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:33:29.390382   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:29.390418   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.390439   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.390458   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.390483   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:29.390568   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:29.390658   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:29.390751   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:29.390833   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
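sshutil builds a plain SSH client from the driver-provided IP, port, user, and key path. A hedged equivalent with golang.org/x/crypto/ssh, using the values from the log line; skipping host-key verification is an assumption about how local VM drivers typically behave, not something the log confirms:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Assumption: host keys for local VMs are not pinned.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}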
	I1204 15:33:29.422140   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 15:33:29.425696   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 15:33:29.434269   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 15:33:29.437377   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 15:33:29.446042   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 15:33:29.449183   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 15:33:29.457490   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 15:33:29.460647   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1204 15:33:29.469352   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 15:33:29.472755   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 15:33:29.481093   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 15:33:29.484099   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1204 15:33:29.492651   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:33:29.513068   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:33:29.533396   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:33:29.553633   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:33:29.573360   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:33:29.592833   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:33:29.612325   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:33:29.631705   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:33:29.651772   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:33:29.671647   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:33:29.691028   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:33:29.710680   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 15:33:29.724088   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 15:33:29.738048   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 15:33:29.751781   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1204 15:33:29.765280   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 15:33:29.779127   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1204 15:33:29.792641   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 15:33:29.806335   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:33:29.810643   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:33:29.819095   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.822486   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.822534   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.826729   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:33:29.835308   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:33:29.843890   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.847451   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.847503   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.851708   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
	I1204 15:33:29.859922   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:33:29.868147   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.871612   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.871654   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.875808   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 15:33:29.884074   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:33:29.887539   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:33:29.891899   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:33:29.896170   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:33:29.900557   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:33:29.904814   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:33:29.909235   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
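Each openssl x509 -checkend 86400 probe asks whether the certificate stays valid for at least another 86400 seconds (24h). The same check in Go with crypto/x509, against one of the paths probed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the cert expires within the next 86400 seconds.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid beyond 24h")
	}
}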
	I1204 15:33:29.913504   20196 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1204 15:33:29.913564   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:33:29.913578   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:33:29.913625   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:33:29.926130   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
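The modprobe batch two lines up loads the IPVS module family that kube-vip's control-plane load balancing (auto-enabled here) relies on. A small sketch that verifies ip_vs actually landed, by scanning /proc/modules:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Each /proc/modules line starts with the module name and a space.
		if strings.HasPrefix(s.Text(), "ip_vs ") {
			fmt.Println("ip_vs is loaded")
			return
		}
	}
	fmt.Println("ip_vs is not loaded")
}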
	I1204 15:33:29.926164   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1204 15:33:29.926229   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:33:29.933952   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:33:29.934013   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 15:33:29.941532   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1204 15:33:29.955276   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:33:29.968570   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
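kube-vip is installed as a static pod: the generated manifest is dropped into /etc/kubernetes/manifests, which kubelet watches and acts on without any API call. A sketch of that drop, using a write-then-rename so kubelet never observes a partial file (the manifest contents stand in for the YAML shown earlier):

package main

import "os"

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\n# ... kube-vip spec as generated above ...\n")
	tmp := "/etc/kubernetes/manifests/.kube-vip.yaml.tmp"
	if err := os.WriteFile(tmp, manifest, 0o644); err != nil {
		panic(err)
	}
	// Rename is atomic on the same filesystem, so kubelet sees either the old
	// manifest or the complete new one, never a half-written file.
	if err := os.Rename(tmp, "/etc/kubernetes/manifests/kube-vip.yaml"); err != nil {
		panic(err)
	}
}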
	I1204 15:33:29.982327   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:33:29.985248   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:29.994738   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:30.085095   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:30.100297   20196 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:30.100505   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:30.121980   20196 out.go:177] * Verifying Kubernetes components...
	I1204 15:33:30.163546   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:30.296003   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:30.317056   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:30.317267   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 15:33:30.317312   20196 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1204 15:33:30.317488   20196 node_ready.go:35] waiting up to 6m0s for node "ha-098000-m02" to be "Ready" ...
	I1204 15:33:30.317571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:30.317576   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:30.317583   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:30.317592   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.429719   20196 round_trippers.go:574] Response Status: 200 OK in 8111 milliseconds
	I1204 15:33:38.437420   20196 node_ready.go:49] node "ha-098000-m02" has status "Ready":"True"
	I1204 15:33:38.437441   20196 node_ready.go:38] duration metric: took 8.119707596s for node "ha-098000-m02" to be "Ready" ...
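The readiness wait is a poll loop over GET /api/v1/nodes/<name>, checking the node's Ready condition. A hedged client-go equivalent, with the kubeconfig path taken from the log and the rest illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/20045-17258/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirrors "waiting up to 6m0s for node ... to be Ready".
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-098000-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node")
}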
	I1204 15:33:38.437450   20196 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:33:38.437502   20196 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 15:33:38.437515   20196 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 15:33:38.437571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:38.437578   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.437593   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.437599   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.455661   20196 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1204 15:33:38.464148   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.464210   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:33:38.464215   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.464221   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.464224   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.470699   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:33:38.471292   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.471302   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.471308   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.471312   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.481534   20196 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 15:33:38.481959   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.481970   20196 pod_ready.go:82] duration metric: took 17.803771ms for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.481977   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.482020   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:33:38.482026   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.482032   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.482035   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.487605   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:38.488267   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.488322   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.488329   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.488343   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.490575   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:38.491180   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.491192   20196 pod_ready.go:82] duration metric: took 9.208421ms for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.491202   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.491280   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000
	I1204 15:33:38.491287   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.491293   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.491297   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.494530   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:38.495165   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.495173   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.495180   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.495184   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.499549   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:38.499961   20196 pod_ready.go:93] pod "etcd-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.499972   20196 pod_ready.go:82] duration metric: took 8.763238ms for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.499980   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.500023   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m02
	I1204 15:33:38.500028   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.500034   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.500039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.506409   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:33:38.506828   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:38.506837   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.506843   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.506846   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.511940   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:38.512316   20196 pod_ready.go:93] pod "etcd-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.512327   20196 pod_ready.go:82] duration metric: took 12.340986ms for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.512334   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.512373   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m03
	I1204 15:33:38.512378   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.512384   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.512389   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.516730   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:38.638087   20196 request.go:632] Waited for 120.794515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:38.638124   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:38.638130   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.638161   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.638169   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.640203   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:38.640614   20196 pod_ready.go:93] pod "etcd-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.640625   20196 pod_ready.go:82] duration metric: took 128.282ms for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
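The recurring "Waited for ... due to client-side throttling" messages come from client-go's token-bucket rate limiter, not from API-server priority and fairness. The config dump earlier shows QPS:0, Burst:0, which falls back to client-go's defaults of 5 QPS with a burst of 10, hence the roughly 200ms gaps between requests. A sketch of where those knobs live (the values below are illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{Host: "https://192.169.0.5:8443"}
	cfg.QPS = 50    // sustained requests per second before throttling kicks in
	cfg.Burst = 100 // short-term allowance above QPS
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}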
	I1204 15:33:38.640638   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.838617   20196 request.go:632] Waited for 197.931176ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:33:38.838688   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:33:38.838697   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.838706   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.838712   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.840867   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:39.037679   20196 request.go:632] Waited for 196.178205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:39.037714   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:39.037719   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.037772   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.037777   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.042421   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:39.042726   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.042736   20196 pod_ready.go:82] duration metric: took 402.080499ms for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.042743   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.237786   20196 request.go:632] Waited for 195.001118ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:33:39.237820   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:33:39.237825   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.237830   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.237835   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.243495   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:39.437668   20196 request.go:632] Waited for 193.740455ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:39.437701   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:39.437706   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.437712   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.437719   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.440123   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:39.440472   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.440482   20196 pod_ready.go:82] duration metric: took 397.72282ms for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.440490   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.638172   20196 request.go:632] Waited for 197.630035ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:33:39.638227   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:33:39.638235   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.638277   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.638301   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.641465   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:39.837863   20196 request.go:632] Waited for 195.844278ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:39.837914   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:39.837923   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.838008   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.838017   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.841077   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:39.841414   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.841423   20196 pod_ready.go:82] duration metric: took 400.91619ms for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.841431   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.037805   20196 request.go:632] Waited for 196.32052ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:33:40.037839   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:33:40.037845   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.037851   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.037857   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.040255   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.238963   20196 request.go:632] Waited for 198.140778ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:40.239022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:40.239028   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.239040   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.239045   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.242092   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:40.242401   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:40.242411   20196 pod_ready.go:82] duration metric: took 400.963216ms for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.242419   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.438693   20196 request.go:632] Waited for 196.229899ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:33:40.438729   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:33:40.438735   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.438741   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.438745   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.441139   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.637709   20196 request.go:632] Waited for 196.13524ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:40.637752   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:40.637777   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.637783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.637787   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.640278   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.640704   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:40.640714   20196 pod_ready.go:82] duration metric: took 398.278068ms for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.640722   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.838825   20196 request.go:632] Waited for 198.055929ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:33:40.838901   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:33:40.838908   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.838927   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.838932   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.841541   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.037964   20196 request.go:632] Waited for 195.880635ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:41.038037   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:41.038043   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.038049   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.038054   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.041754   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.042231   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.042241   20196 pod_ready.go:82] duration metric: took 401.502224ms for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.042248   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.237873   20196 request.go:632] Waited for 195.582123ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:33:41.237946   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:33:41.237952   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.237957   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.237961   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.240730   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.438126   20196 request.go:632] Waited for 196.947205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:41.438157   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:41.438167   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.438207   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.438212   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.440777   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.441074   20196 pod_ready.go:93] pod "kube-proxy-8dv6r" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.441084   20196 pod_ready.go:82] duration metric: took 398.818652ms for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.441091   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.639164   20196 request.go:632] Waited for 198.003801ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:33:41.639309   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:33:41.639320   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.639331   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.639338   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.643045   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.838863   20196 request.go:632] Waited for 195.192063ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:41.838912   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:41.838924   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.838946   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.838954   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.842314   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.842750   20196 pod_ready.go:93] pod "kube-proxy-9strn" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.842763   20196 pod_ready.go:82] duration metric: took 401.652541ms for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.842771   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.039281   20196 request.go:632] Waited for 196.459472ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:33:42.039417   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:33:42.039428   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.039439   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.039447   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.042816   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:42.238811   20196 request.go:632] Waited for 195.378249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:33:42.238885   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:33:42.238891   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.238898   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.238903   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.240764   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:42.241072   20196 pod_ready.go:93] pod "kube-proxy-mz4q2" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:42.241084   20196 pod_ready.go:82] duration metric: took 398.294263ms for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.241092   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.438843   20196 request.go:632] Waited for 197.705446ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:33:42.438887   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:33:42.438898   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.438905   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.438908   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.440868   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:42.638818   20196 request.go:632] Waited for 197.361352ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:42.638884   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:42.638895   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.638906   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.638914   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.642158   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:42.642556   20196 pod_ready.go:93] pod "kube-proxy-rf4cp" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:42.642569   20196 pod_ready.go:82] duration metric: took 401.459636ms for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.642580   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.839526   20196 request.go:632] Waited for 196.890487ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:33:42.839701   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:33:42.839713   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.839724   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.839732   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.843198   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.037789   20196 request.go:632] Waited for 194.105591ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:43.037944   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:43.037961   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.037975   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.037982   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.041343   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.041920   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.041933   20196 pod_ready.go:82] duration metric: took 399.3347ms for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.041942   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.239892   20196 request.go:632] Waited for 197.874831ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:33:43.239961   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:33:43.239969   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.239983   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.239991   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.243085   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.438099   20196 request.go:632] Waited for 194.176391ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:43.438141   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:43.438168   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.438176   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.438185   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.440115   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:43.440578   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.440586   20196 pod_ready.go:82] duration metric: took 398.625667ms for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.440601   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.639811   20196 request.go:632] Waited for 199.133254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:33:43.639908   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:33:43.639919   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.639930   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.639940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.643164   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.839903   20196 request.go:632] Waited for 196.135821ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:43.839967   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:43.839976   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.839987   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.839994   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.843566   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.844161   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.844175   20196 pod_ready.go:82] duration metric: took 403.555453ms for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.844208   20196 pod_ready.go:39] duration metric: took 5.406590624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
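
The pod_ready checks above poll each system pod's PodReady condition until it reports True. A minimal client-go sketch of that check, assuming an illustrative kubeconfig path (the pod name is taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// mirroring the pod_ready.go checks in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Poll until the scheduler pod on m03 reports Ready, as the test does.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-098000-m03", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
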
	I1204 15:33:43.844253   20196 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:33:43.844326   20196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:33:43.855983   20196 api_server.go:72] duration metric: took 13.755275558s to wait for apiserver process to appear ...
	I1204 15:33:43.855995   20196 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:33:43.856010   20196 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1204 15:33:43.860186   20196 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1204 15:33:43.860225   20196 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1204 15:33:43.860230   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.860243   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.860246   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.860683   20196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 15:33:43.860804   20196 api_server.go:141] control plane version: v1.31.2
	I1204 15:33:43.860815   20196 api_server.go:131] duration metric: took 4.815788ms to wait for apiserver health ...
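
The healthz gate above is a plain HTTPS GET that expects the literal body "ok". A sketch of the same probe; skipping certificate verification is for brevity only (a real client should verify against the cluster CA), and the default RBAC binding that exposes /healthz to unauthenticated clients is assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is for illustration only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
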
	I1204 15:33:43.860824   20196 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 15:33:44.038297   20196 request.go:632] Waited for 177.420142ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.038389   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.038399   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.038411   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.038421   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.044078   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:44.049007   20196 system_pods.go:59] 26 kube-system pods found
	I1204 15:33:44.049023   20196 system_pods.go:61] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:33:44.049029   20196 system_pods.go:61] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:33:44.049032   20196 system_pods.go:61] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:33:44.049034   20196 system_pods.go:61] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:33:44.049038   20196 system_pods.go:61] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:33:44.049041   20196 system_pods.go:61] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:33:44.049043   20196 system_pods.go:61] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:33:44.049046   20196 system_pods.go:61] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:33:44.049049   20196 system_pods.go:61] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:33:44.049051   20196 system_pods.go:61] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:33:44.049054   20196 system_pods.go:61] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:33:44.049056   20196 system_pods.go:61] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:33:44.049059   20196 system_pods.go:61] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:33:44.049069   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:33:44.049073   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:33:44.049075   20196 system_pods.go:61] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:33:44.049078   20196 system_pods.go:61] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:33:44.049080   20196 system_pods.go:61] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:33:44.049084   20196 system_pods.go:61] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:33:44.049087   20196 system_pods.go:61] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:33:44.049089   20196 system_pods.go:61] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:33:44.049092   20196 system_pods.go:61] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:33:44.049094   20196 system_pods.go:61] "kube-vip-ha-098000" [618bf60c-e57e-4c04-832e-71eebf18044d] Running
	I1204 15:33:44.049097   20196 system_pods.go:61] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:33:44.049099   20196 system_pods.go:61] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:33:44.049102   20196 system_pods.go:61] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running
	I1204 15:33:44.049106   20196 system_pods.go:74] duration metric: took 188.271977ms to wait for pod list to return data ...
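
The recurring "Waited for ...ms due to client-side throttling, not priority and fairness" lines come from client-go's default token-bucket limiter (QPS 5, burst 10), which the burst of sequential GETs above keeps tripping. A sketch of raising those limits on a rest.Config, with illustrative values:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	// Either raise the defaults directly...
	cfg.QPS = 50
	cfg.Burst = 100
	// ...or install an explicit token-bucket limiter (takes precedence over QPS/Burst).
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
	_ = kubernetes.NewForConfigOrDie(cfg)
}
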
	I1204 15:33:44.049112   20196 default_sa.go:34] waiting for default service account to be created ...
	I1204 15:33:44.239205   20196 request.go:632] Waited for 190.005694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:33:44.239263   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:33:44.239272   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.239283   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.239322   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.243527   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:44.243704   20196 default_sa.go:45] found service account: "default"
	I1204 15:33:44.243713   20196 default_sa.go:55] duration metric: took 194.591962ms for default service account to be created ...
	I1204 15:33:44.243719   20196 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 15:33:44.439115   20196 request.go:632] Waited for 195.322716ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.439234   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.439246   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.439258   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.439264   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.444755   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:44.449718   20196 system_pods.go:86] 26 kube-system pods found
	I1204 15:33:44.449733   20196 system_pods.go:89] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:33:44.449738   20196 system_pods.go:89] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:33:44.449741   20196 system_pods.go:89] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:33:44.449744   20196 system_pods.go:89] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:33:44.449748   20196 system_pods.go:89] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:33:44.449750   20196 system_pods.go:89] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:33:44.449753   20196 system_pods.go:89] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:33:44.449755   20196 system_pods.go:89] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:33:44.449758   20196 system_pods.go:89] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:33:44.449761   20196 system_pods.go:89] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:33:44.449765   20196 system_pods.go:89] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:33:44.449768   20196 system_pods.go:89] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:33:44.449771   20196 system_pods.go:89] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:33:44.449774   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:33:44.449777   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:33:44.449783   20196 system_pods.go:89] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:33:44.449786   20196 system_pods.go:89] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:33:44.449789   20196 system_pods.go:89] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:33:44.449793   20196 system_pods.go:89] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:33:44.449795   20196 system_pods.go:89] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:33:44.449798   20196 system_pods.go:89] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:33:44.449801   20196 system_pods.go:89] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:33:44.449804   20196 system_pods.go:89] "kube-vip-ha-098000" [618bf60c-e57e-4c04-832e-71eebf18044d] Running
	I1204 15:33:44.449806   20196 system_pods.go:89] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:33:44.449810   20196 system_pods.go:89] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:33:44.449813   20196 system_pods.go:89] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running
	I1204 15:33:44.449818   20196 system_pods.go:126] duration metric: took 206.089298ms to wait for k8s-apps to be running ...
	I1204 15:33:44.449823   20196 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 15:33:44.449890   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:33:44.461452   20196 system_svc.go:56] duration metric: took 11.623487ms WaitForService to wait for kubelet
	I1204 15:33:44.461466   20196 kubeadm.go:582] duration metric: took 14.360743481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
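
The kubelet gate shells out to systemd on the guest over SSH. The same check as a local Go sketch; exit status 0 from `systemctl is-active --quiet` means the unit is active:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the logged `sudo systemctl is-active --quiet service kubelet`;
	// locally the conventional single-unit form suffices.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
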
	I1204 15:33:44.461484   20196 node_conditions.go:102] verifying NodePressure condition ...
	I1204 15:33:44.639462   20196 request.go:632] Waited for 177.925125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1204 15:33:44.639538   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1204 15:33:44.639548   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.639560   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.639568   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.643595   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:44.644812   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644828   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644839   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644849   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644853   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644856   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644858   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644861   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644864   20196 node_conditions.go:105] duration metric: took 183.370218ms to run NodePressure ...
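
The NodePressure pass above lists all nodes and reads ephemeral-storage and cpu capacity off each node object. A client-go sketch of that read, again with an illustrative kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
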
	I1204 15:33:44.644872   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:33:44.644890   20196 start.go:255] writing updated cluster config ...
	I1204 15:33:44.665849   20196 out.go:201] 
	I1204 15:33:44.687912   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:44.688042   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.710522   20196 out.go:177] * Starting "ha-098000-m03" control-plane node in "ha-098000" cluster
	I1204 15:33:44.752466   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:44.752500   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:44.752679   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:33:44.752697   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:44.752830   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.753998   20196 start.go:360] acquireMachinesLock for ha-098000-m03: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:44.754068   20196 start.go:364] duration metric: took 52.377µs to acquireMachinesLock for "ha-098000-m03"
	I1204 15:33:44.754085   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:33:44.754091   20196 fix.go:54] fixHost starting: m03
	I1204 15:33:44.754406   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:44.754430   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:44.765918   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58653
	I1204 15:33:44.766304   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:44.766704   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:44.766719   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:44.766938   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:44.767056   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:44.767166   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetState
	I1204 15:33:44.767251   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.767322   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid from json: 19347
	I1204 15:33:44.768480   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid 19347 missing from process table
	I1204 15:33:44.768517   20196 fix.go:112] recreateIfNeeded on ha-098000-m03: state=Stopped err=<nil>
	I1204 15:33:44.768528   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	W1204 15:33:44.768610   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:33:44.789653   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m03" ...
	I1204 15:33:44.831751   20196 main.go:141] libmachine: (ha-098000-m03) Calling .Start
	I1204 15:33:44.832023   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.832066   20196 main.go:141] libmachine: (ha-098000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid
	I1204 15:33:44.834593   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid 19347 missing from process table
	I1204 15:33:44.834606   20196 main.go:141] libmachine: (ha-098000-m03) DBG | pid 19347 is in state "Stopped"
	I1204 15:33:44.834626   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid...
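
The driver concludes the VM is Stopped because the pid recorded in hyperkit.pid is gone from the process table. Signal 0 is the usual Unix liveness probe for that; a sketch with an illustrative pid-file path:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive sends signal 0, which performs the existence/permission check
// without actually delivering a signal.
func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	data, err := os.ReadFile("/path/to/hyperkit.pid") // illustrative path
	if err != nil {
		panic(err)
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		panic(err)
	}
	if !pidAlive(pid) {
		// Stale pid file: safe to remove and restart the VM, as the driver does.
		os.Remove("/path/to/hyperkit.pid")
		fmt.Println("pid", pid, "missing from process table; removed stale pid file")
	}
}
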
	I1204 15:33:44.835523   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Using UUID eac2e001-90c5-40d6-830d-b844e6baedeb
	I1204 15:33:44.861764   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Generated MAC 56:f8:e7:bc:e7:07
	I1204 15:33:44.861784   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:33:44.862005   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eac2e001-90c5-40d6-830d-b844e6baedeb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:44.862041   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eac2e001-90c5-40d6-830d-b844e6baedeb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:44.862100   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "eac2e001-90c5-40d6-830d-b844e6baedeb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/ha-098000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:33:44.862139   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U eac2e001-90c5-40d6-830d-b844e6baedeb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/ha-098000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:33:44.862604   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:33:44.864474   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Pid is 20231
	I1204 15:33:44.864862   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Attempt 0
	I1204 15:33:44.864878   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.864933   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid from json: 20231
	I1204 15:33:44.866074   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Searching for 56:f8:e7:bc:e7:07 in /var/db/dhcpd_leases ...
	I1204 15:33:44.866145   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:33:44.866158   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f4d1}
	I1204 15:33:44.866167   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:33:44.866177   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:33:44.866182   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f3e2}
	I1204 15:33:44.866187   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Found match: 56:f8:e7:bc:e7:07
	I1204 15:33:44.866193   20196 main.go:141] libmachine: (ha-098000-m03) DBG | IP: 192.169.0.7
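
IP discovery works by matching the VM's generated MAC against /var/db/dhcpd_leases. A rough parsing sketch; the ip_address/hw_address field names are assumptions about macOS's lease-file format, and note from the entries above that stored MACs drop leading zeros (…e7:07 appears as …e7:7), so octets need normalizing before comparison:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// normalizeMAC strips leading zeros from each octet so that
// "56:f8:e7:bc:e7:07" and "56:f8:e7:bc:e7:7" compare equal.
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

func main() {
	want := normalizeMAC("56:f8:e7:bc:e7:07")
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="): // assumed field name
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="): // assumed form: "hw_address=1,<mac>"
			mac := line[strings.Index(line, ",")+1:]
			if normalizeMAC(mac) == want {
				fmt.Println("IP:", ip)
				return
			}
		}
	}
}
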
	I1204 15:33:44.866266   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetConfigRaw
	I1204 15:33:44.866960   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:44.867187   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.867733   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:33:44.867748   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:44.867880   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:44.867991   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:44.868083   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:44.868175   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:44.868275   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:44.868449   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:44.868607   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:44.868615   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:33:44.875700   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:33:44.885221   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:33:44.886534   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:44.886590   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:44.886624   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:44.886641   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:45.310864   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:33:45.310888   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:33:45.426378   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:45.426408   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:45.426418   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:45.426427   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:45.427201   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:33:45.427213   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:51.200443   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:33:51.200513   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:33:51.200524   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:33:51.225933   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:33:55.935290   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:55.935305   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:55.935436   20196 buildroot.go:166] provisioning hostname "ha-098000-m03"
	I1204 15:33:55.935445   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:55.935551   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:55.935640   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:55.935732   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:55.935825   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:55.935912   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:55.936073   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:55.936205   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:55.936213   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m03 && echo "ha-098000-m03" | sudo tee /etc/hostname
	I1204 15:33:56.008649   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m03
	
	I1204 15:33:56.008663   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.008821   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.008915   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.009001   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.009093   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.009247   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.009386   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.009397   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:56.076925   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
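
Each provisioning step above is a shell snippet run over SSH as the docker user. A minimal golang.org/x/crypto/ssh sketch of that pattern, using the host, user, and key path from the log; ignoring host keys is for illustration only:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}
	client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// One session per command, as the provisioner does.
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	fmt.Printf("out=%q err=%v\n", out, err)
}
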
	I1204 15:33:56.076941   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:56.076950   20196 buildroot.go:174] setting up certificates
	I1204 15:33:56.076956   20196 provision.go:84] configureAuth start
	I1204 15:33:56.076962   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:56.077121   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:56.077219   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.077318   20196 provision.go:143] copyHostCerts
	I1204 15:33:56.077346   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:56.077405   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:56.077411   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:56.077538   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:56.077740   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:56.077775   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:56.077780   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:56.077851   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:56.078007   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:56.078036   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:56.078041   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:56.078135   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:56.078295   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m03 san=[127.0.0.1 192.169.0.7 ha-098000-m03 localhost minikube]
	I1204 15:33:56.184360   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:56.184421   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:56.184436   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.184584   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.184682   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.184788   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.184878   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:56.222358   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:56.222423   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:56.242527   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:56.242598   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:33:56.262411   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:56.262492   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 15:33:56.282604   20196 provision.go:87] duration metric: took 205.634097ms to configureAuth
	I1204 15:33:56.282619   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:56.282802   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:56.282816   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:56.282954   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.283056   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.283161   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.283267   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.283366   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.283498   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.283620   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.283628   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:56.345040   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:56.345053   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:56.345129   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:56.345143   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.345280   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.345367   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.345443   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.345524   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.345668   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.345805   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.345851   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:56.424345   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:56.424363   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.424517   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.424685   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.424787   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.424878   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.425031   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.425156   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.425173   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:58.122525   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:33:58.122539   20196 machine.go:96] duration metric: took 13.254423135s to provisionDockerMachine
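
The diff-or-replace one-liner a few lines above makes the unit install idempotent: docker is only reloaded and restarted when the rendered file actually differs from what is installed. The same idea as a Go sketch with illustrative paths:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged moves newPath over path only when contents differ,
// returning whether a change (and hence a service restart) is needed.
func installIfChanged(path, newPath string) (bool, error) {
	oldData, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	if bytes.Equal(oldData, newData) {
		os.Remove(newPath) // nothing to do
		return false, nil
	}
	return true, os.Rename(newPath, path)
}

func main() {
	changed, err := installIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
	)
	fmt.Println("changed:", changed, "err:", err)
	// When changed, follow with daemon-reload / enable / restart, as the log does.
}
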
	I1204 15:33:58.122547   20196 start.go:293] postStartSetup for "ha-098000-m03" (driver="hyperkit")
	I1204 15:33:58.122554   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:58.122566   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.122762   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:58.122783   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.122871   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.122946   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.123045   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.123137   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.161639   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:58.164739   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:58.164749   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:58.164831   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:58.164968   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:58.164974   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:58.165140   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:58.173027   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:58.192093   20196 start.go:296] duration metric: took 69.536473ms for postStartSetup
	I1204 15:33:58.192114   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.192306   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:58.192320   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.192414   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.192509   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.192600   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.192674   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.230841   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:58.230926   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:58.265220   20196 fix.go:56] duration metric: took 13.510737637s for fixHost
	I1204 15:33:58.265271   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.265414   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.265524   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.265620   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.265713   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.265865   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:58.266013   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:58.266021   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:58.330663   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355238.486070391
	
	I1204 15:33:58.330676   20196 fix.go:216] guest clock: 1733355238.486070391
	I1204 15:33:58.330682   20196 fix.go:229] Guest: 2024-12-04 15:33:58.486070391 -0800 PST Remote: 2024-12-04 15:33:58.265237 -0800 PST m=+65.184150423 (delta=220.833391ms)
	I1204 15:33:58.330692   20196 fix.go:200] guest clock delta is within tolerance: 220.833391ms
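
The guest-clock check compares the output of `date +%s.%N` on the guest against the host clock and skips resyncing when the delta is within tolerance. A sketch of that comparison; the 2s tolerance is illustrative:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log.
	guestOut := "1733355238.486070391"

	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	if len(parts) != 2 {
		panic("unexpected date output")
	}
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	const tolerance = 2 * time.Second // illustrative
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; resync needed\n", delta)
	}
}
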
	I1204 15:33:58.330696   20196 start.go:83] releasing machines lock for "ha-098000-m03", held for 13.576240131s
	I1204 15:33:58.330714   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.330854   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:58.352510   20196 out.go:177] * Found network options:
	I1204 15:33:58.380745   20196 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1204 15:33:58.401983   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:33:58.402013   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:58.402029   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402504   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402654   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402766   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:58.402819   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	W1204 15:33:58.402881   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:33:58.402902   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:58.402977   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.403000   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:33:58.403012   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.403174   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.403214   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.403349   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.403358   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.403564   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.403575   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.403741   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	W1204 15:33:58.437750   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:58.437828   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:58.485243   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:58.485257   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:58.485329   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:58.514237   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:58.528266   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:58.539804   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:58.539880   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:58.555961   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:58.566195   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:58.575257   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:58.584192   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:58.593620   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:58.603021   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:58.612370   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:33:58.621502   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:58.630294   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:58.630368   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:58.640300   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 15:33:58.648626   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:58.742860   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:58.760057   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:58.760138   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:58.778296   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:58.793165   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:58.807402   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:58.818936   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:58.829930   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:58.849768   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:58.861249   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:58.876335   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:58.879342   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:58.887395   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:58.901271   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:59.012726   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:59.108627   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:59.108651   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:59.122518   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:59.224950   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:34:01.525196   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.300161441s)
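Editor's note: the log reports configuring Docker for the cgroupfs driver and ships a 130-byte /etc/docker/daemon.json over scp, but the payload itself is never printed. A hypothetical reconstruction, assuming the usual exec-opts mechanism; the field values here are assumptions, not the logged file:

package main

import (
	"encoding/json"
	"os"
)

// Hypothetical reconstruction of the ~130-byte /etc/docker/daemon.json the
// log copies over; the essential part is selecting the cgroupfs driver.
// The exact fields minikube writes are NOT shown in the log (assumption).
func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"}, // assumed mechanism
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", append(b, '\n'), 0644); err != nil {
		panic(err)
	}
}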
	I1204 15:34:01.525275   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:34:01.537533   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:34:01.552928   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:01.564251   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:34:01.666308   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:34:01.762184   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:01.857672   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:34:01.871507   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:01.882955   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:01.972213   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:34:02.036955   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:34:02.037050   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:34:02.042796   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:34:02.042875   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:34:02.046431   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:34:02.073232   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
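Editor's note: "Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a simple stat-until-deadline loop before crictl is first invoked. Sketch with an illustrative helper:

package main

import (
	"fmt"
	"os"
	"time"
)

// Sketch of the 60s socket wait the log performs before calling crictl:
// stat the socket path until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}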
	I1204 15:34:02.073324   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:02.089702   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:02.126985   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:34:02.168586   20196 out.go:177]   - env NO_PROXY=192.169.0.5
	I1204 15:34:02.190567   20196 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I1204 15:34:02.211577   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:34:02.211977   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:34:02.216597   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:34:02.226113   20196 mustload.go:65] Loading cluster: ha-098000
	I1204 15:34:02.226314   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:02.226550   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:02.226577   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:02.238043   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58675
	I1204 15:34:02.238357   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:02.238749   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:02.238766   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:02.238998   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:02.239102   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:34:02.239217   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:02.239287   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:34:02.240505   20196 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:34:02.240770   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:02.240796   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:02.252028   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58677
	I1204 15:34:02.252346   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:02.252700   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:02.252719   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:02.252937   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:02.253032   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:34:02.253139   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.7
	I1204 15:34:02.253146   20196 certs.go:194] generating shared ca certs ...
	I1204 15:34:02.253156   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:34:02.253308   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:34:02.253362   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:34:02.253371   20196 certs.go:256] generating profile certs ...
	I1204 15:34:02.253468   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:34:02.253856   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.d946d3b4
	I1204 15:34:02.253925   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:34:02.253938   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:34:02.253962   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:34:02.253983   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:34:02.254009   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:34:02.254028   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:34:02.254046   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:34:02.254065   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:34:02.254082   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:34:02.254159   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:34:02.254203   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:34:02.254211   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:34:02.254246   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:34:02.254278   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:34:02.254310   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:34:02.254374   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:34:02.254409   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.254429   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.254447   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.254475   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:34:02.254562   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:34:02.254640   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:34:02.254716   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:34:02.254794   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:34:02.285982   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 15:34:02.289453   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 15:34:02.298834   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 15:34:02.302369   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 15:34:02.315418   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 15:34:02.318593   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 15:34:02.327312   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 15:34:02.330564   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1204 15:34:02.339456   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 15:34:02.342515   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 15:34:02.351231   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 15:34:02.354286   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1204 15:34:02.363156   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:34:02.384838   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:34:02.405926   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:34:02.426535   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:34:02.446742   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:34:02.466560   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:34:02.486853   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:34:02.507184   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:34:02.528073   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:34:02.548964   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:34:02.569347   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:34:02.589426   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 15:34:02.603866   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 15:34:02.617657   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 15:34:02.631813   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1204 15:34:02.645494   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 15:34:02.659961   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1204 15:34:02.673777   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
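Editor's note: the cert sync above runs in two passes: shared material (sa.pub/sa.key, front-proxy CA, etcd CA) is stat'ed and read back from the existing control plane into memory, then the profile certs and kubeconfig are pushed out to the new node. A minimal sketch of the bookkeeping, with local file I/O standing in for the SSH transfers (pull/push are illustrative names, not minikube's):

package main

import (
	"fmt"
	"os"
)

func pull(path string) ([]byte, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	fmt.Printf("scp %s --> memory (%d bytes)\n", path, len(b))
	return b, nil
}

func push(data []byte, path string) error {
	fmt.Printf("scp memory --> %s (%d bytes)\n", path, len(data))
	return os.WriteFile(path, data, 0600) // keys stay owner-readable only
}

func main() {
	key, err := pull("/var/lib/minikube/certs/sa.key") // remote in minikube's case
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := push(key, "/tmp/sa.key"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}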
	I1204 15:34:02.687446   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:34:02.691739   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:34:02.700420   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.703973   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.704042   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.708497   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:34:02.717646   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:34:02.726542   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.729989   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.730041   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.734277   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
	I1204 15:34:02.742686   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:34:02.751027   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.754461   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.754515   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.758843   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
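Editor's note: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and the `ln -fs ... /etc/ssl/certs/<hash>.0` lines exist because OpenSSL resolves trust anchors by that hashed filename (b5213941.0 is minikubeCA's hash above). Sketch of the same hash-and-link step, shelling out to openssl as the log does (paths illustrative, needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like ln -fs
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pem)
}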
	I1204 15:34:02.767465   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:34:02.770903   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:34:02.776086   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:34:02.780679   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:34:02.785121   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:34:02.789654   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:34:02.794116   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
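Editor's note: the `-checkend 86400` probes above ask whether each cert expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check in pure Go via crypto/x509 (path illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Equivalent of `openssl x509 -checkend 86400`: report whether the cert
// expires within the given window.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}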
	I1204 15:34:02.798756   20196 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1204 15:34:02.798834   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
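Editor's note: the drop-in above blanks the packaged ExecStart and relaunches kubelet with node-specific flags (--hostname-override, --node-ip). A sketch rendering it from per-node values with text/template; the unit text mirrors the log, the rendering harness is illustrative:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.31.2",
		"Node":    "ha-098000-m03",
		"IP":      "192.169.0.7",
	}); err != nil {
		panic(err)
	}
}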
	I1204 15:34:02.798851   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:34:02.798902   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:34:02.811676   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:34:02.811716   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
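Editor's note: the generated kube-vip manifest above advertises the VIP 192.169.0.254 on port 8443 and pins the leader-election timings (lease 5s, renew 3s, retry 1s). Those must satisfy retry < renew < lease for the election to behave; that constraint is the usual leader-election rule, not quoted from minikube. A trivial sanity check of the generated values:

package main

import "fmt"

func main() {
	lease, renew, retry := 5, 3, 1 // seconds, copied from the config above
	if !(retry < renew && renew < lease) {
		fmt.Println("invalid leader-election timings")
		return
	}
	fmt.Printf("ok: retry=%ds < renew=%ds < lease=%ds, VIP 192.169.0.254:8443\n", retry, renew, lease)
}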
	I1204 15:34:02.811802   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:34:02.820056   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:34:02.820120   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 15:34:02.827634   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1204 15:34:02.840903   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:34:02.854283   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:34:02.867957   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:34:02.870915   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
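Editor's note: both /etc/hosts one-liners (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same idiom: filter out any stale line for the name, append the fresh IP<tab>name mapping, and install the result via a temp file. Same idea in Go; upsertHost is an illustrative helper, and the log uses sudo cp where this sketch renames:

package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing entry for this name (grep -v equivalent).
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the log uses sudo cp; rename is the sketch's shortcut
}

func main() {
	if err := upsertHost("/etc/hosts", "192.169.0.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}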
	I1204 15:34:02.880410   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:02.978715   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:34:02.992761   20196 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:34:02.992956   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:03.013320   20196 out.go:177] * Verifying Kubernetes components...
	I1204 15:34:03.055094   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:03.162591   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:34:03.175308   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:34:03.175517   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 15:34:03.175556   20196 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1204 15:34:03.175722   20196 node_ready.go:35] waiting up to 6m0s for node "ha-098000-m03" to be "Ready" ...
	I1204 15:34:03.175774   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:03.175780   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.175788   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.175793   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.177877   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.178182   20196 node_ready.go:49] node "ha-098000-m03" has status "Ready":"True"
	I1204 15:34:03.178191   20196 node_ready.go:38] duration metric: took 2.460684ms for node "ha-098000-m03" to be "Ready" ...
	I1204 15:34:03.178204   20196 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:34:03.178249   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:03.178255   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.178261   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.178265   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.181589   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:03.187858   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:03.187917   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:03.187923   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.187928   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.187931   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.190071   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.190536   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:03.190544   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.190550   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.190553   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.192357   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
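Editor's note: node_ready/pod_ready drive the long run of GETs that follows: poll the API roughly every 500ms (matching the timestamps) until the node or pod reports Ready, within the 6-minute budget announced above. Generic sketch of that wait loop; the condition closure stands in for the kube-system pod GET plus status check:

package main

import (
	"fmt"
	"time"
)

// Poll cond at a fixed interval until it holds, errors, or times out.
func waitFor(cond func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() (bool, error) {
		// Stand-in for: GET /api/v1/namespaces/kube-system/pods/coredns-... and
		// checking the Ready condition in the returned status.
		return time.Since(start) > 2*time.Second, nil
	}, 500*time.Millisecond, 6*time.Minute)
	fmt.Println("ready:", err == nil)
}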
	I1204 15:34:03.689890   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:03.689913   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.689960   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.689970   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.692722   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.693137   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:03.693145   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.693150   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.693154   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.694862   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:04.188595   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:04.188612   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.188618   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.188622   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.190926   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:04.191442   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:04.191451   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.191457   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.191460   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.193377   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:04.689410   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:04.689427   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.689433   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.689436   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.691829   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:04.692311   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:04.692320   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.692326   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.692329   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.694756   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.188051   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:05.188069   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.188075   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.188079   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.190537   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.191234   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:05.191244   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.191250   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.191254   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.193184   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:05.193754   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:05.689571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:05.689583   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.689589   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.689592   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.692119   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.693045   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:05.693054   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.693060   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.693070   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.695078   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:06.188182   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:06.188196   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.188203   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.188206   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.190803   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:06.191335   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:06.191343   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.191353   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.191358   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.193354   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:06.688125   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:06.688144   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.688150   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.688153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.698567   20196 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 15:34:06.699659   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:06.699669   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.699674   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.699678   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.702231   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:07.188129   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:07.188142   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.188149   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.188152   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.190314   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:07.190783   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:07.190793   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.190799   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.190803   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.192721   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.689429   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:07.689444   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.689450   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.689453   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.691383   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.691809   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:07.691816   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.691822   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.691827   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.693593   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.693894   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:08.189338   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:08.189353   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.189361   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.189365   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.191565   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:08.192110   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:08.192118   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.192124   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.192134   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.193879   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:08.689140   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:08.689155   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.689194   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.689198   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.691672   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:08.692190   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:08.692197   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.692203   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.692206   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.694257   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.189377   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:09.189389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.189396   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.189399   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.191765   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.192318   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:09.192326   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.192333   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.192337   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.194226   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:09.688422   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:09.688435   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.688441   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.688445   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.690918   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.691538   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:09.691546   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.691552   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.691556   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.693405   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:10.188400   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:10.188426   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.188438   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.188445   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.191226   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:10.191923   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:10.191930   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.191936   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.191940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.193682   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:10.194054   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:10.689544   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:10.689566   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.689601   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.689607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.692171   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:10.692830   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:10.692842   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.692848   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.692852   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.694354   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:11.188970   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:11.188983   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.188989   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.188992   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.193348   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:11.193835   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:11.193844   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.193850   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.193854   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.195899   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:11.688737   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:11.688752   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.688758   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.688761   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.691007   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:11.691483   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:11.691491   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.691496   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.691500   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.693198   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:12.188889   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:12.188972   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.188986   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.188999   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.192039   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:12.192581   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:12.192589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.192595   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.192598   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.194300   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:12.194673   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:12.688761   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:12.688869   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.688880   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.688888   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.691475   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:12.692022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:12.692029   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.692035   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.692039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.693737   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.190399   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:13.190424   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.190436   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.190441   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.193795   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:13.194709   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:13.194717   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.194722   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.194725   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.196228   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.688349   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:13.688361   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.688367   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.688370   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.690278   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.690775   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:13.690783   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.690788   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.690792   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.692350   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.189443   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:14.189461   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.189470   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.189474   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.191713   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:14.192328   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:14.192336   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.192341   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.192345   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.194132   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.689369   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:14.689471   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.689487   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.689522   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.693058   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:14.693755   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:14.693762   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.693768   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.693771   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.695478   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.695986   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:15.189753   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:15.189777   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.189833   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.189842   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.193300   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:15.193825   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:15.193835   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.193842   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.193848   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.195490   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:15.688564   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:15.688589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.688600   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.688607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.691559   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:15.692137   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:15.692145   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.692152   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.692156   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.693792   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:16.188974   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:16.188991   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.188999   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.189003   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.191876   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:16.192266   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:16.192273   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.192279   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.192283   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.193909   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:16.689589   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:16.689601   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.689607   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.689609   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.691735   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:16.692340   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:16.692348   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.692354   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.692364   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.694139   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:17.188693   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:17.188719   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.188730   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.188737   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.192306   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:17.192880   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:17.192888   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.192893   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.192896   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.194607   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:17.194930   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:17.689803   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:17.689822   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.689833   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.689840   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.692900   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:17.693582   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:17.693591   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.693596   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.693600   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.695568   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:18.189872   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:18.189891   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.189903   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.189909   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.193143   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:18.193659   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:18.193669   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.193677   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.193682   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.195539   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:18.689089   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:18.689110   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.689121   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.689128   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.692465   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:18.693092   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:18.693099   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.693105   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.693109   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.694811   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.188836   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:19.188866   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.188885   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.188893   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.191083   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:19.191481   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:19.191489   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.191494   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.191498   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.193210   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.688920   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:19.689019   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.689034   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.689040   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.692204   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:19.692887   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:19.692895   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.692901   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.692905   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.694482   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.694834   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:20.189463   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:20.189482   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.189495   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.189507   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.192820   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:20.193489   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:20.193497   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.193503   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.193506   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.195170   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:20.689312   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:20.689335   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.689345   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.689353   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.692898   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:20.693406   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:20.693413   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.693419   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.693435   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.695237   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:21.189479   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:21.189499   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.189511   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.189519   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.192490   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:21.193119   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:21.193127   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.193132   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.193136   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.194670   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:21.689574   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:21.689589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.689595   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.689598   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.691684   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:21.692133   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:21.692140   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.692145   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.692156   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.694020   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:22.189311   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:22.189327   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.189334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.189337   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.191942   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:22.192424   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:22.192432   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.192438   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.192441   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.194080   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:22.194500   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:22.689269   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:22.689284   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.689293   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.689297   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.691724   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:22.692389   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:22.692397   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.692404   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.692407   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.694417   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:23.188903   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:23.188937   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.188944   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.188948   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.191281   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:23.191769   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:23.191776   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.191783   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.191786   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.193597   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:23.689658   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:23.689673   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.689682   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.689688   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.692154   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:23.692597   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:23.692605   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.692611   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.692614   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.694442   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:24.190414   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:24.190439   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.190448   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.190453   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.193694   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:24.194336   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:24.194343   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.194349   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.194352   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.196204   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:24.196507   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:24.689283   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:24.689324   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.689334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.689339   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.691786   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:24.692252   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:24.692260   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.692265   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.692269   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.694045   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:25.189972   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:25.189988   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.189995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.189997   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.192150   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:25.192590   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:25.192598   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.192604   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.192607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.194554   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:25.689840   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:25.689893   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.689902   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.689908   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.692432   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:25.693530   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:25.693539   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.693545   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.693556   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.695085   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.188685   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:26.188774   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.188787   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.188792   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.191478   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:26.191981   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:26.191990   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.191995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.191998   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.193972   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.689955   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:26.690060   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.690076   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.690084   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.693025   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:26.693583   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:26.693591   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.693596   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.693601   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.695193   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.695569   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:27.190057   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:27.190079   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.190096   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.190102   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.193105   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:27.193849   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:27.193860   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.193868   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.193873   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.195538   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:27.688758   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:27.688772   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.688779   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.688783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.694666   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:27.695270   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:27.695278   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.695283   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.695288   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.696913   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:28.188770   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:28.188819   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.188832   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.188840   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.191808   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:28.192403   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:28.192411   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.192416   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.192420   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.194136   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:28.689405   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:28.689487   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.689503   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.689511   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.694694   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:28.695230   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:28.695237   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.695243   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.695246   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.697820   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:28.698133   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:29.190106   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:29.190125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.190138   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.190143   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.193071   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:29.193687   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:29.193698   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.193706   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.193711   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.195444   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:29.689830   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:29.689849   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.689862   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.689867   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.692977   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:29.693745   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:29.693753   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.693759   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.693762   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.695525   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:30.190945   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:30.190965   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.190976   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.190988   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.195195   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:30.195850   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:30.195859   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.195865   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.195869   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.197592   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:30.689476   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:30.689500   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.689510   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.689516   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.692808   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:30.693458   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:30.693466   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.693471   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.693474   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.695140   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:31.189274   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:31.189389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.189404   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.189413   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.192545   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:31.193168   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:31.193179   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.193186   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.193193   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.194805   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:31.195157   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:31.690066   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:31.690125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.690139   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.690147   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.693489   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:31.694073   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:31.694084   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.694093   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.694098   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.695789   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:32.190294   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:32.190315   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.190326   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.190333   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.193258   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:32.193839   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:32.193846   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.193852   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.193856   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.195470   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:32.689113   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:32.689137   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.689148   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.689153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.692269   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:32.692828   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:32.692836   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.692842   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.692845   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.694429   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.188950   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:33.188969   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.188980   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.188987   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.191891   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:33.192381   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:33.192389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.192395   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.192400   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.194337   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.690112   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:33.690134   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.690145   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.690153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.693581   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:33.694215   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:33.694223   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.694229   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.694232   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.696177   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.696454   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:34.189881   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:34.189900   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.189912   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.189918   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.193287   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:34.193886   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:34.193897   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.193909   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.193915   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.195881   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:34.689892   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:34.689916   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.689931   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.689940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.693606   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:34.694219   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:34.694227   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.694234   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.694237   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.696105   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.188973   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:35.189024   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.189039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.189046   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.192172   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:35.192755   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:35.192763   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.192769   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.192772   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.194518   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.690180   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:35.690201   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.690214   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.690223   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.694006   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:35.694605   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:35.694612   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.694619   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.694622   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.696235   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.696565   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:36.189741   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:36.189767   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.189779   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.189785   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.193344   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:36.194036   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:36.194047   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.194055   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.194059   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.195836   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:36.690199   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:36.690224   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.690236   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.690241   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.693462   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:36.694091   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:36.694102   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.694110   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.694116   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.695766   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:37.190287   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:37.190309   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.190320   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.190326   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.196511   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:34:37.197043   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:37.197052   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.197058   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.197061   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.199818   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:37.690095   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:37.690118   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.690129   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.690136   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.693801   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:37.694618   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:37.694626   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.694632   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.694636   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.696670   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:37.697007   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:38.190293   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:38.190317   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.190329   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.190338   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.194628   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:38.195183   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:38.195190   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.195196   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.195201   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.197386   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:38.689866   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:38.689889   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.689900   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.689905   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.693601   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:38.694401   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:38.694412   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.694420   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.694426   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.696297   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:39.190990   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:39.191012   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.191024   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.191031   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.198155   20196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 15:34:39.199473   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:39.199482   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.199488   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.199493   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.205055   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:39.690106   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:39.690130   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.690142   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.690147   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.693615   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:39.694445   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:39.694452   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.694458   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.694462   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.696222   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.189693   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:40.189718   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.189731   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.189746   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.193370   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:40.194004   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.194012   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.194018   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.194021   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.195604   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.195934   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:40.195944   20196 pod_ready.go:82] duration metric: took 37.007028934s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:40.195952   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:40.195984   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:40.195989   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.195995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.195999   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.197711   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.198120   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.198128   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.198134   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.198136   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.199690   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.696200   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:40.696219   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.696228   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.696232   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.698719   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:40.699262   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.699270   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.699277   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.699281   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.701563   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:41.196423   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:41.196440   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.196446   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.196449   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.199972   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:41.200435   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:41.200444   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.200449   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.200454   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.202156   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:41.696302   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:41.696325   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.696334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.696376   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.698859   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:41.699465   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:41.699474   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.699480   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.699486   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.701569   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.197903   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:42.197925   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.197937   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.197942   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.200867   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.201412   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.201420   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.201427   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.201431   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.203130   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.203467   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:42.697162   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:42.697182   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.697194   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.697200   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.700051   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.700562   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.700570   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.700576   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.700579   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.702701   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.703063   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.703073   20196 pod_ready.go:82] duration metric: took 2.507044671s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.703080   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.703116   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000
	I1204 15:34:42.703121   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.703129   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.703134   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.705021   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.705585   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.705592   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.705598   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.705609   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.707581   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.708069   20196 pod_ready.go:93] pod "etcd-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.708079   20196 pod_ready.go:82] duration metric: took 4.993321ms for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.708086   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.708121   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m02
	I1204 15:34:42.708126   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.708131   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.708135   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.710061   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.710514   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:42.710522   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.710528   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.710532   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.712173   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.712569   20196 pod_ready.go:93] pod "etcd-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.712578   20196 pod_ready.go:82] duration metric: took 4.485807ms for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.712584   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.712616   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m03
	I1204 15:34:42.712621   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.712627   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.712630   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.714463   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.714960   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:42.714968   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.714976   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.714980   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.716756   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.717063   20196 pod_ready.go:93] pod "etcd-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.717072   20196 pod_ready.go:82] duration metric: took 4.482301ms for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.717082   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.717112   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:34:42.717116   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.717122   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.717126   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.718813   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.719178   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.719186   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.719192   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.719196   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.720739   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.721127   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.721135   20196 pod_ready.go:82] duration metric: took 4.047168ms for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.721141   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.898812   20196 request.go:632] Waited for 177.546709ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:34:42.898865   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:34:42.898875   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.898884   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.898890   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.901957   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.097426   20196 request.go:632] Waited for 194.940606ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:43.097482   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:43.097488   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.097494   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.097498   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.099791   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.100329   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.100338   20196 pod_ready.go:82] duration metric: took 379.181132ms for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
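
The "Waited for … due to client-side throttling" entries above come from the Kubernetes client's own token-bucket rate limiter, not from server-side API Priority and Fairness (the message says as much). A minimal sketch of the same pattern using golang.org/x/time/rate; the QPS/burst values are illustrative assumptions, not minikube's actual client settings:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// Token bucket: ~5 requests/second with a burst of 10. Values chosen
    	// for illustration only; client-go defaults differ per client.
    	limiter := rate.NewLimiter(rate.Limit(5), 10)

    	for i := 0; i < 20; i++ {
    		start := time.Now()
    		if err := limiter.Wait(context.Background()); err != nil {
    			panic(err)
    		}
    		if waited := time.Since(start); waited > time.Millisecond {
    			// Mirrors the "Waited for ... due to client-side throttling" log line.
    			fmt.Printf("Waited for %v due to client-side throttling\n", waited)
    		}
    		// ... issue the GET here ...
    	}
    }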
	I1204 15:34:43.100345   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.297467   20196 request.go:632] Waited for 197.060564ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:34:43.297531   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:34:43.297536   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.297542   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.297546   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.299888   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.498066   20196 request.go:632] Waited for 197.627847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:43.498144   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:43.498154   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.498165   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.498171   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.501495   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.501933   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.501946   20196 pod_ready.go:82] duration metric: took 401.584296ms for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.501955   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.697541   20196 request.go:632] Waited for 195.539974ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:34:43.697609   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:34:43.697614   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.697620   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.697624   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.699660   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.897896   20196 request.go:632] Waited for 197.715706ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:43.897988   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:43.897999   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.898011   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.898040   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.901116   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.901493   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.901504   20196 pod_ready.go:82] duration metric: took 399.531331ms for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.901511   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.097961   20196 request.go:632] Waited for 196.319346ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:34:44.098022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:34:44.098031   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.098043   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.098052   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.101549   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.297607   20196 request.go:632] Waited for 195.557496ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:44.297743   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:44.297756   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.297766   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.297776   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.301215   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.301821   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:44.301835   20196 pod_ready.go:82] duration metric: took 400.304316ms for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.301844   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.497418   20196 request.go:632] Waited for 195.52419ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:34:44.497540   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:34:44.497551   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.497561   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.497567   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.500605   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.697803   20196 request.go:632] Waited for 196.768583ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:44.697874   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:44.697880   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.697886   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.697892   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.699791   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:44.700181   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:44.700191   20196 pod_ready.go:82] duration metric: took 398.331303ms for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.700206   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.897582   20196 request.go:632] Waited for 197.274481ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:34:44.897621   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:34:44.897628   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.897636   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.897643   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.899968   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.098303   20196 request.go:632] Waited for 197.936546ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:45.098458   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:45.098471   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.098481   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.098489   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.101906   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.102405   20196 pod_ready.go:93] pod "kube-proxy-8dv6r" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:45.102418   20196 pod_ready.go:82] duration metric: took 402.19463ms for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.102429   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.297787   20196 request.go:632] Waited for 195.298622ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:34:45.297896   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:34:45.297908   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.297918   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.297924   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.301224   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.497743   20196 request.go:632] Waited for 195.731374ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:45.497789   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:45.497798   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.497808   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.497816   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.501296   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.501752   20196 pod_ready.go:93] pod "kube-proxy-9strn" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:45.501764   20196 pod_ready.go:82] duration metric: took 399.314475ms for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.501772   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.698321   20196 request.go:632] Waited for 196.486057ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:34:45.698364   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:34:45.698368   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.698395   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.698400   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.700678   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.898338   20196 request.go:632] Waited for 197.154497ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:34:45.898437   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:34:45.898445   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.898454   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.898460   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.900811   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.901113   20196 pod_ready.go:98] node "ha-098000-m04" hosting pod "kube-proxy-mz4q2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-098000-m04" has status "Ready":"Unknown"
	I1204 15:34:45.901124   20196 pod_ready.go:82] duration metric: took 399.323564ms for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	E1204 15:34:45.901130   20196 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-098000-m04" hosting pod "kube-proxy-mz4q2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-098000-m04" has status "Ready":"Unknown"
	I1204 15:34:45.901136   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.098348   20196 request.go:632] Waited for 197.16954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:34:46.098411   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:34:46.098417   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.098423   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.098428   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.100807   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.298026   20196 request.go:632] Waited for 196.74762ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:46.298086   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:46.298092   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.298098   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.298103   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.300358   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.300719   20196 pod_ready.go:93] pod "kube-proxy-rf4cp" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:46.300729   20196 pod_ready.go:82] duration metric: took 399.576022ms for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.300737   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.497896   20196 request.go:632] Waited for 197.086517ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:34:46.497983   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:34:46.498051   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.498063   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.498071   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.501601   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:46.698084   20196 request.go:632] Waited for 195.78543ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:46.698119   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:46.698125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.698170   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.698177   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.700251   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.700719   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:46.700729   20196 pod_ready.go:82] duration metric: took 399.975386ms for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.700736   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.898629   20196 request.go:632] Waited for 197.83339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:34:46.898748   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:34:46.898762   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.898773   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.898783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.902413   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.099363   20196 request.go:632] Waited for 196.494986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:47.099466   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:47.099477   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.099488   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.099495   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.102564   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.102986   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:47.102995   20196 pod_ready.go:82] duration metric: took 402.242621ms for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.103002   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.297846   20196 request.go:632] Waited for 194.795128ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:34:47.297889   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:34:47.297939   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.297949   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.297953   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.300484   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:47.498216   20196 request.go:632] Waited for 197.302267ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:47.498266   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:47.498358   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.498374   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.498381   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.501722   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.502017   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:47.502028   20196 pod_ready.go:82] duration metric: took 399.008512ms for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.502037   20196 pod_ready.go:39] duration metric: took 44.322579822s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
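
Each pod_ready block above is one iteration of the same loop: GET the pod, check its Ready condition, then GET the hosting node (which is why the m04 proxy pod is skipped when its node reports Ready "Unknown"). A self-contained sketch of that readiness poll against the raw REST API; authentication and CA verification are deliberately omitted here, whereas the real client authenticates with the cluster's client certificates:

    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"time"
    )

    // podStatus models only the fields this check needs.
    type podStatus struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    // waitPodReady polls GET <url> until the pod reports Ready=True or the
    // timeout elapses, mirroring the pod_ready loop in the log above.
    func waitPodReady(client *http.Client, url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil && resp.StatusCode == http.StatusOK {
    			var p podStatus
    			err = json.NewDecoder(resp.Body).Decode(&p)
    			resp.Body.Close()
    			if err == nil {
    				for _, c := range p.Status.Conditions {
    					if c.Type == "Ready" && c.Status == "True" {
    						return nil
    					}
    				}
    			}
    		} else if resp != nil {
    			resp.Body.Close()
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out after %v waiting for pod Ready", timeout)
    }

    func main() {
    	// InsecureSkipVerify only because this sketch has no cluster CA wired in.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	url := "https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000"
    	if err := waitPodReady(client, url, 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }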
	I1204 15:34:47.502061   20196 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:34:47.502149   20196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:34:47.513881   20196 api_server.go:72] duration metric: took 44.519844285s to wait for apiserver process to appear ...
	I1204 15:34:47.513892   20196 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:34:47.513909   20196 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1204 15:34:47.516967   20196 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1204 15:34:47.517003   20196 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1204 15:34:47.517008   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.517014   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.517018   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.517533   20196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 15:34:47.517562   20196 api_server.go:141] control plane version: v1.31.2
	I1204 15:34:47.517569   20196 api_server.go:131] duration metric: took 3.673154ms to wait for apiserver health ...
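
The healthz probe above treats a 200 response whose body is the literal string "ok" as healthy before moving on to /version. A short sketch of that check, with the same insecure-client caveat as the readiness sketch:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    // checkHealthz mirrors the probe above: a 200 response whose body is
    // exactly "ok" counts as healthy.
    func checkHealthz(client *http.Client, base string) error {
    	resp, err := client.Get(base + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return err
    	}
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
    	}
    	return nil
    }

    func main() {
    	// InsecureSkipVerify stands in for real cluster CA wiring, as before.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	fmt.Println(checkHealthz(client, "https://192.169.0.5:8443"))
    }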
	I1204 15:34:47.517575   20196 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 15:34:47.697569   20196 request.go:632] Waited for 179.954091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:47.697605   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:47.697611   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.697617   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.697621   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.702548   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:47.707779   20196 system_pods.go:59] 26 kube-system pods found
	I1204 15:34:47.707791   20196 system_pods.go:61] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:34:47.707795   20196 system_pods.go:61] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:34:47.707798   20196 system_pods.go:61] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:34:47.707801   20196 system_pods.go:61] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:34:47.707809   20196 system_pods.go:61] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:34:47.707813   20196 system_pods.go:61] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:34:47.707815   20196 system_pods.go:61] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:34:47.707818   20196 system_pods.go:61] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:34:47.707821   20196 system_pods.go:61] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:34:47.707823   20196 system_pods.go:61] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:34:47.707826   20196 system_pods.go:61] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:34:47.707830   20196 system_pods.go:61] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:34:47.707837   20196 system_pods.go:61] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:34:47.707841   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:34:47.707844   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:34:47.707846   20196 system_pods.go:61] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:34:47.707849   20196 system_pods.go:61] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:34:47.707851   20196 system_pods.go:61] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:34:47.707854   20196 system_pods.go:61] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:34:47.707857   20196 system_pods.go:61] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:34:47.707860   20196 system_pods.go:61] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:34:47.707862   20196 system_pods.go:61] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:34:47.707865   20196 system_pods.go:61] "kube-vip-ha-098000" [e04c72cd-f983-42ad-b97f-eeff7a988de3] Running
	I1204 15:34:47.707867   20196 system_pods.go:61] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:34:47.707870   20196 system_pods.go:61] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:34:47.707874   20196 system_pods.go:61] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 15:34:47.707879   20196 system_pods.go:74] duration metric: took 190.294933ms to wait for pod list to return data ...
	I1204 15:34:47.707885   20196 default_sa.go:34] waiting for default service account to be created ...
	I1204 15:34:47.897357   20196 request.go:632] Waited for 189.411036ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:34:47.897446   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:34:47.897455   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.897463   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.897470   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.899736   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:47.899815   20196 default_sa.go:45] found service account: "default"
	I1204 15:34:47.899824   20196 default_sa.go:55] duration metric: took 191.920936ms for default service account to be created ...
	I1204 15:34:47.899831   20196 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 15:34:48.097563   20196 request.go:632] Waited for 197.602094ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:48.097612   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:48.097620   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:48.097663   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:48.097675   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:48.102765   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:48.109211   20196 system_pods.go:86] 26 kube-system pods found
	I1204 15:34:48.109362   20196 system_pods.go:89] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:34:48.109371   20196 system_pods.go:89] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:34:48.109375   20196 system_pods.go:89] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:34:48.109379   20196 system_pods.go:89] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:34:48.109383   20196 system_pods.go:89] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:34:48.109386   20196 system_pods.go:89] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:34:48.109389   20196 system_pods.go:89] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:34:48.109393   20196 system_pods.go:89] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:34:48.109396   20196 system_pods.go:89] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:34:48.109400   20196 system_pods.go:89] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:34:48.109403   20196 system_pods.go:89] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:34:48.109406   20196 system_pods.go:89] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:34:48.109409   20196 system_pods.go:89] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:34:48.109413   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:34:48.109417   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:34:48.109419   20196 system_pods.go:89] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:34:48.109422   20196 system_pods.go:89] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:34:48.109425   20196 system_pods.go:89] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:34:48.109428   20196 system_pods.go:89] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:34:48.109431   20196 system_pods.go:89] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:34:48.109434   20196 system_pods.go:89] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:34:48.109437   20196 system_pods.go:89] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:34:48.109439   20196 system_pods.go:89] "kube-vip-ha-098000" [e04c72cd-f983-42ad-b97f-eeff7a988de3] Running
	I1204 15:34:48.109442   20196 system_pods.go:89] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:34:48.109445   20196 system_pods.go:89] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:34:48.109450   20196 system_pods.go:89] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 15:34:48.109455   20196 system_pods.go:126] duration metric: took 209.614349ms to wait for k8s-apps to be running ...
	I1204 15:34:48.109461   20196 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 15:34:48.109531   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:34:48.120276   20196 system_svc.go:56] duration metric: took 10.810365ms WaitForService to wait for kubelet
	I1204 15:34:48.120291   20196 kubeadm.go:582] duration metric: took 45.126238068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:34:48.120303   20196 node_conditions.go:102] verifying NodePressure condition ...
	I1204 15:34:48.297415   20196 request.go:632] Waited for 177.05913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1204 15:34:48.297455   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1204 15:34:48.297461   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:48.297469   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:48.297475   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:48.300123   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:48.300830   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300840   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300847   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300850   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300853   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300856   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300860   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300862   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300866   20196 node_conditions.go:105] duration metric: took 180.554037ms to run NodePressure ...
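
The NodePressure pass lists every node once and reads status.capacity; the four storage/cpu pairs above are the four nodes of the ha-098000 cluster. A sketch of extracting those two capacity fields from the list response (TLS/auth wiring abbreviated as in the sketches above):

    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // nodeList models only the fields the capacity check reads.
    type nodeList struct {
    	Items []struct {
    		Metadata struct {
    			Name string `json:"name"`
    		} `json:"metadata"`
    		Status struct {
    			Capacity map[string]string `json:"capacity"`
    		} `json:"status"`
    	} `json:"items"`
    }

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    	}}
    	resp, err := client.Get("https://192.169.0.5:8443/api/v1/nodes")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var nl nodeList
    	if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
    		panic(err)
    	}
    	for _, n := range nl.Items {
    		// These are the fields behind the "ephemeral capacity" / "cpu capacity" lines.
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
    			n.Metadata.Name, n.Status.Capacity["ephemeral-storage"], n.Status.Capacity["cpu"])
    	}
    }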
	I1204 15:34:48.300874   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:34:48.300889   20196 start.go:255] writing updated cluster config ...
	I1204 15:34:48.322431   20196 out.go:201] 
	I1204 15:34:48.344449   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:48.344580   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.367119   20196 out.go:177] * Starting "ha-098000-m04" worker node in "ha-098000" cluster
	I1204 15:34:48.409090   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:34:48.409115   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:34:48.409244   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:34:48.409257   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:34:48.409347   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.410058   20196 start.go:360] acquireMachinesLock for ha-098000-m04: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:34:48.410126   20196 start.go:364] duration metric: took 51.472µs to acquireMachinesLock for "ha-098000-m04"
	I1204 15:34:48.410144   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:34:48.410150   20196 fix.go:54] fixHost starting: m04
	I1204 15:34:48.410455   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:48.410480   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:48.421860   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58681
	I1204 15:34:48.422147   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:48.422522   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:48.422541   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:48.422736   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:48.422817   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:34:48.422956   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetState
	I1204 15:34:48.423067   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.423135   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 19762
	I1204 15:34:48.424293   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid 19762 missing from process table
	I1204 15:34:48.424344   20196 fix.go:112] recreateIfNeeded on ha-098000-m04: state=Stopped err=<nil>
	I1204 15:34:48.424356   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	W1204 15:34:48.424441   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:34:48.445040   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m04" ...
	I1204 15:34:48.535157   20196 main.go:141] libmachine: (ha-098000-m04) Calling .Start
	I1204 15:34:48.535373   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.535405   20196 main.go:141] libmachine: (ha-098000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid
	I1204 15:34:48.535476   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Using UUID 8502617a-13a7-430f-a6ae-7be776245ae1
	I1204 15:34:48.565169   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Generated MAC 7a:59:49:d0:f8:66
	I1204 15:34:48.565217   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:34:48.565376   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8502617a-13a7-430f-a6ae-7be776245ae1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:34:48.565411   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8502617a-13a7-430f-a6ae-7be776245ae1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:34:48.565471   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8502617a-13a7-430f-a6ae-7be776245ae1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/ha-098000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:34:48.565528   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8502617a-13a7-430f-a6ae-7be776245ae1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/ha-098000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:34:48.565552   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:34:48.566902   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Pid is 20252
	I1204 15:34:48.567481   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Attempt 0
	I1204 15:34:48.567496   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.567619   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 20252
	I1204 15:34:48.570453   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Searching for 7a:59:49:d0:f8:66 in /var/db/dhcpd_leases ...
	I1204 15:34:48.570536   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:34:48.570551   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f4f2}
	I1204 15:34:48.570574   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f4d1}
	I1204 15:34:48.570588   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:34:48.570605   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:34:48.570615   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Found match: 7a:59:49:d0:f8:66
	I1204 15:34:48.570625   20196 main.go:141] libmachine: (ha-098000-m04) DBG | IP: 192.169.0.8
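
The driver recovers the restarted VM's IP by matching its generated MAC against /var/db/dhcpd_leases, as the "Searching for … Found match" lines show. A sketch of that lookup, assuming the stock macOS bootpd block format (one key=value field per line, hw_address prefixed with its type, e.g. 1,7a:59:49:d0:f8:66); note the ID fields above show octets with leading zeros dropped, so a real parser also normalizes those:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // lookupLeaseIP scans the macOS DHCP lease file for a hardware address
    // and returns the matching IP. Assumes ip_address precedes hw_address
    // within each lease block, as in the stock bootpd output.
    func lookupLeaseIP(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// hw_address carries a type prefix, e.g. "1,7a:59:49:d0:f8:66".
    			hw := strings.TrimPrefix(line, "hw_address=")
    			if i := strings.IndexByte(hw, ','); i >= 0 {
    				hw = hw[i+1:]
    			}
    			if strings.EqualFold(hw, mac) && ip != "" {
    				return ip, nil
    			}
    		}
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	ip, err := lookupLeaseIP("/var/db/dhcpd_leases", "7a:59:49:d0:f8:66")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("IP:", ip)
    }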
	I1204 15:34:48.570635   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetConfigRaw
	I1204 15:34:48.571737   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:34:48.571957   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.572535   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:34:48.572555   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:34:48.572720   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:48.572824   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:48.572944   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:48.573100   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:48.573236   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:48.573428   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:48.573574   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:48.573582   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:34:48.578618   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:34:48.587514   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:34:48.588773   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:34:48.588818   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:34:48.588867   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:34:48.588887   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:34:49.021227   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:34:49.021251   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:34:49.136078   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:34:49.136099   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:34:49.136106   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:34:49.136115   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:34:49.136921   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:34:49.136930   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:34:54.890690   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:34:54.890729   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:34:54.890737   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:34:54.916069   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:34:59.632189   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:34:59.632205   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.632363   20196 buildroot.go:166] provisioning hostname "ha-098000-m04"
	I1204 15:34:59.632375   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.632472   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.632554   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:59.632630   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.632721   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.632816   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:59.633517   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:59.633682   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:59.633692   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m04 && echo "ha-098000-m04" | sudo tee /etc/hostname
	I1204 15:34:59.697622   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m04
	
	I1204 15:34:59.697639   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.697775   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:59.697886   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.697981   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.698057   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:59.698172   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:59.698298   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:59.698309   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
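
The /etc/hosts edit above is deliberately idempotent: the outer grep only fires when no line already names the host, and the inner branch either rewrites an existing 127.0.1.1 entry or appends one. A standalone sketch of the same pattern (NODE is a placeholder hostname, and GNU grep/sed are assumed, as shipped in the Buildroot guest):

	#!/bin/sh
	# Sketch of the idempotent /etc/hosts update logged above.
	NODE=example-node   # placeholder hostname
	if ! grep -xq ".*\s${NODE}" /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			# an entry exists: rewrite it in place
			sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NODE}/" /etc/hosts
		else
			# no entry yet: append one
			echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
		fi
	fi
	# a second run matches the first grep and changes nothing
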
	I1204 15:34:59.757369   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:34:59.757388   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:34:59.757401   20196 buildroot.go:174] setting up certificates
	I1204 15:34:59.757413   20196 provision.go:84] configureAuth start
	I1204 15:34:59.757421   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.757593   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:34:59.757706   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.757790   20196 provision.go:143] copyHostCerts
	I1204 15:34:59.757821   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:34:59.757873   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:34:59.757878   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:34:59.758004   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:34:59.758235   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:34:59.758271   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:34:59.758277   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:34:59.758377   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:34:59.758555   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:34:59.758595   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:34:59.758601   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:34:59.758673   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:34:59.758840   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m04 san=[127.0.0.1 192.169.0.8 ha-098000-m04 localhost minikube]
	I1204 15:35:00.089781   20196 provision.go:177] copyRemoteCerts
	I1204 15:35:00.090065   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:35:00.090090   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.090250   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.090364   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.090440   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.090527   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:00.124202   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:35:00.124273   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:35:00.161213   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:35:00.161289   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:35:00.180684   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:35:00.180757   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
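
With ca.pem, server.pem and server-key.pem landed in /etc/docker, the daemon later started with --tlsverify only accepts clients whose certificates chain to that CA. A usage sketch from the host side, reusing the client cert paths copied earlier in this log (this only succeeds once the daemon is actually healthy, which it is not by the end of this run):

	# Query the TLS-guarded docker endpoint on the m04 node (sketch).
	CERTS=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs
	docker --tlsverify \
	  --tlscacert "$CERTS/ca.pem" \
	  --tlscert   "$CERTS/cert.pem" \
	  --tlskey    "$CERTS/key.pem" \
	  -H tcp://192.169.0.8:2376 version
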
	I1204 15:35:00.200255   20196 provision.go:87] duration metric: took 442.820652ms to configureAuth
	I1204 15:35:00.200272   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:35:00.201095   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:35:00.201110   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:00.201255   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.201346   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.201433   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.201525   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.201613   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.201739   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.201862   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.201869   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:35:00.254941   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:35:00.254954   20196 buildroot.go:70] root file system type: tmpfs
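
The one-line probe above is how the provisioner learns it is on a tmpfs root: on this Buildroot guest /lib/systemd/system is writable but not persistent, which is presumably why the docker unit is re-rendered on every provision rather than written once. The probe is easy to reuse as-is (GNU coreutils df):

	# Print the filesystem type backing / and warn if it is volatile.
	fstype="$(df --output=fstype / | tail -n 1)"
	echo "root filesystem: ${fstype}"
	if [ "${fstype}" = "tmpfs" ]; then
		echo "note: changes under / will not survive a reboot"
	fi
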
	I1204 15:35:00.255043   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:35:00.255055   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.255192   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.255284   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.255363   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.255444   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.255591   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.255723   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.255769   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:35:00.320168   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
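
Two details of the unit echoed above are easy to miss. First, the bare `ExecStart=` line clears any command inherited from a base unit, exactly as the embedded comments say; without it systemd would see two ExecStart= values and refuse to start a Type=notify service. Second, the three Environment=NO_PROXY lines each reassign the same variable, and for Environment= the last assignment wins, so the effective value is the full three-address list. Both are quick to confirm on the guest (sketch):

	# Show the unit (and any drop-ins) systemd actually loaded.
	systemctl cat docker
	# All configured assignments; the last NO_PROXY listed is the effective one.
	systemctl show docker -p Environment
	# Optional static check, if systemd-analyze is shipped on the guest.
	systemd-analyze verify /lib/systemd/system/docker.service
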
	I1204 15:35:00.320186   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.320331   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.320425   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.320520   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.320607   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.320759   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.320905   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.320920   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:35:01.894648   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
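
The `diff -u old new || { mv; daemon-reload; enable; restart; }` idiom above only touches the service when the rendered file actually differs; here diff fails because no docker.service existed yet, so the new file is moved into place and the unit is enabled, which produces the "Created symlink" line. The pattern in isolation (sketch, with a placeholder path for the rendered unit):

	# Install/refresh a unit file only when its content changed (sketch).
	NEW=/tmp/docker.service.new            # placeholder: freshly rendered unit
	DST=/lib/systemd/system/docker.service
	if ! sudo diff -u "$DST" "$NEW" >/dev/null 2>&1; then
		sudo mv "$NEW" "$DST"
		sudo systemctl daemon-reload
		sudo systemctl enable docker
		sudo systemctl restart docker
	fi
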
	I1204 15:35:01.894665   20196 machine.go:96] duration metric: took 13.321743335s to provisionDockerMachine
	I1204 15:35:01.894674   20196 start.go:293] postStartSetup for "ha-098000-m04" (driver="hyperkit")
	I1204 15:35:01.894686   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:35:01.894699   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:01.894901   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:35:01.894920   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:01.895018   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:01.895119   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:01.895219   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:01.895309   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:01.930531   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:35:01.933734   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:35:01.933745   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:35:01.933830   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:35:01.934221   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:35:01.934229   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:35:01.934400   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:35:01.942635   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:35:01.962080   20196 start.go:296] duration metric: took 67.394691ms for postStartSetup
	I1204 15:35:01.962104   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:01.962295   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:35:01.962307   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:01.962392   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:01.962474   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:01.962566   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:01.962648   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:01.996347   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:35:01.996427   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:35:02.030032   20196 fix.go:56] duration metric: took 13.619496662s for fixHost
	I1204 15:35:02.030058   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.030197   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.030296   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.030393   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.030479   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.030637   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:02.030806   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:02.030817   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:35:02.085147   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355302.120673328
	
	I1204 15:35:02.085159   20196 fix.go:216] guest clock: 1733355302.120673328
	I1204 15:35:02.085164   20196 fix.go:229] Guest: 2024-12-04 15:35:02.120673328 -0800 PST Remote: 2024-12-04 15:35:02.030047 -0800 PST m=+128.947170547 (delta=90.626328ms)
	I1204 15:35:02.085182   20196 fix.go:200] guest clock delta is within tolerance: 90.626328ms
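
The fix.go lines above read the guest's `date +%s.%N`, subtract it from the host clock, and accept the ~90 ms skew as within tolerance. A rough equivalent from any machine with SSH access to the node (sketch; "guest" is a placeholder SSH alias, GNU date is assumed for %N support, and the SSH round-trip inflates the result):

	guest=$(ssh guest 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "clock delta: %.3f s\n", h - g }'
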
	I1204 15:35:02.085188   20196 start.go:83] releasing machines lock for "ha-098000-m04", held for 13.674670433s
	I1204 15:35:02.085206   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.085349   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:35:02.123833   20196 out.go:177] * Found network options:
	I1204 15:35:02.144638   20196 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W1204 15:35:02.165506   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.165534   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.165554   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:35:02.165573   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.166172   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.166326   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	W1204 15:35:02.166492   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.166507   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.166517   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:35:02.166609   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:35:02.166623   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.166758   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.166911   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.167036   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.167085   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:35:02.167112   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.167158   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:02.167255   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.167389   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.167508   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.167638   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	W1204 15:35:02.202034   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:35:02.202111   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:35:02.250167   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:35:02.250181   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:35:02.250263   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:35:02.264522   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:35:02.273699   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:35:02.283110   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:35:02.283199   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:35:02.292318   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:35:02.301397   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:35:02.310459   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:35:02.319592   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:35:02.328805   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:35:02.338084   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:35:02.347336   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
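
The sed pipeline above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false to select the cgroupfs driver, the legacy io.containerd.runtime.v1.linux and runc.v1 names are mapped to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is injected under the CRI plugin. A spot-check of the result (sketch):

	# Verify the rewritten containerd config carries the expected values.
	grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
		/etc/containerd/config.toml
	# expected, roughly:
	#   sandbox_image = "registry.k8s.io/pause:3.10"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true
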
	I1204 15:35:02.356538   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:35:02.364640   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:35:02.364708   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:35:02.374467   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
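
The sysctl probe above exits with status 255 simply because br_netfilter is not loaded yet (there is no /proc/sys/net/bridge tree), which the log itself treats as "might be okay"; modprobe then loads the module and ip_forward is switched on for the current boot. On a persistent system the same two tweaks are usually made reboot-proof like this (not needed on this tmpfs guest; shown for reference):

	# Persistent form of the two runtime tweaks above (sketch).
	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	sudo tee /etc/sysctl.d/99-kubernetes.conf <<-'EOF'
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward = 1
	EOF
	sudo sysctl --system
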
	I1204 15:35:02.382987   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:35:02.482753   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:35:02.500374   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:35:02.500464   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:35:02.521212   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:35:02.537841   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:35:02.556887   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:35:02.568330   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:35:02.579634   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:35:02.599962   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:35:02.611341   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:35:02.627983   20196 ssh_runner.go:195] Run: which cri-dockerd
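
The two printf-to-tee runs above rewrite /etc/crictl.yaml twice: first pointing crictl at the system containerd socket, then, once docker is settled on as the runtime, at /var/run/cri-dockerd.sock. The file is a single key and crictl reads it from that default path, so a quick sanity check on the guest looks like this (sketch; the query only answers once the selected runtime is actually up, which never happens in this run):

	# Confirm which runtime endpoint crictl is configured for, then query it.
	cat /etc/crictl.yaml     # expect: runtime-endpoint: unix:///var/run/cri-dockerd.sock
	sudo crictl info
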
	I1204 15:35:02.630940   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:35:02.638934   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:35:02.652587   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:35:02.752578   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:35:02.855546   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:35:02.855575   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
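
The 130-byte daemon.json pushed from memory here is not printed in the log; given the "configuring docker to use cgroupfs" message just above, it presumably carries at least the cgroup-driver exec-opt. The following content is an assumption for illustration, not the actual payload:

	# Hypothetical /etc/docker/daemon.json consistent with the log message;
	# the real 130-byte file is not shown in this report.
	sudo tee /etc/docker/daemon.json <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
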
	I1204 15:35:02.869623   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:35:02.966924   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:36:03.915497   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.946841873s)
	I1204 15:36:03.916405   20196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1204 15:36:03.950956   20196 out.go:201] 
	W1204 15:36:03.971878   20196 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 04 23:35:00 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640232708Z" level=info msg="Starting up"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640913001Z" level=info msg="containerd not running, starting managed containerd"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.641520029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=498
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.659694182Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677007859Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677106781Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677181167Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677217787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677508761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677564998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677718553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677761182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677794548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677829672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677979478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.678361377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.679991465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680045979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680192561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680239332Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680562445Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680612744Z" level=info msg="metadata content store policy set" policy=shared
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684019168Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684126285Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684179264Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684280902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684315598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684384845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684662040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684780718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684823731Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684856490Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684888664Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684919549Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684954923Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684987161Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685018887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685064260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685101516Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685133834Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685178048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685213190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685243893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685277956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685310825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685342262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685371807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685438293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685477655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685510785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685541139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685570835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685612124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685654983Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685694239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685725951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685757256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685828769Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685873022Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686013280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686053930Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686084541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686114731Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686150092Z" level=info msg="NRI interface is disabled by configuration."
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686396292Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686486749Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686550930Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686589142Z" level=info msg="containerd successfully booted in 0.028291s"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.663269012Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.685002759Z" level=info msg="Loading containers: start."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.779781751Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.847897599Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.892577077Z" level=info msg="Loading containers: done."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902420090Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902480737Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902498001Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902856617Z" level=info msg="Daemon has completed initialization"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925683807Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925904543Z" level=info msg="API listen on [::]:2376"
	Dec 04 23:35:01 ha-098000-m04 systemd[1]: Started Docker Application Container Engine.
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.029030705Z" level=info msg="Processing signal 'terminated'"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.030916905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031062918Z" level=info msg="Daemon shutdown complete"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031129826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031209544Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 04 23:35:03 ha-098000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: docker.service: Deactivated successfully.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 dockerd[1154]: time="2024-12-04T23:35:04.084800926Z" level=info msg="Starting up"
	Dec 04 23:36:04 ha-098000-m04 dockerd[1154]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
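
The journal dump above contains the actual failure mode: the first dockerd (pid 491) launches its own managed containerd on /var/run/docker/containerd/containerd.sock and comes up cleanly, but the dockerd restarted after the configuration push (pid 1154) spends the full 60 s trying to dial /run/containerd/containerd.sock, i.e. the system containerd socket, and that daemon was stopped earlier in this run (`sudo systemctl stop -f containerd`) and never started again. One plausible way to confirm that reading on the guest (sketch):

	# Diagnostic sketch for the dial timeout above.
	systemctl is-active containerd           # expected here: inactive
	ls -l /run/containerd/containerd.sock    # expected here: No such file or directory
	journalctl --no-pager -u containerd | tail -n 20
	# if the system containerd is what dockerd should use:
	# sudo systemctl start containerd && sudo systemctl restart docker
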
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 04 23:35:00 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640232708Z" level=info msg="Starting up"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640913001Z" level=info msg="containerd not running, starting managed containerd"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.641520029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=498
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.659694182Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677007859Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677106781Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677181167Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677217787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677508761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677564998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677718553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677761182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677794548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677829672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677979478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.678361377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.679991465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680045979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680192561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680239332Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680562445Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680612744Z" level=info msg="metadata content store policy set" policy=shared
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684019168Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684126285Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684179264Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684280902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684315598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684384845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684662040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684780718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684823731Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684856490Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684888664Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684919549Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684954923Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684987161Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685018887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685064260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685101516Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685133834Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685178048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685213190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685243893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685277956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685310825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685342262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685371807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685438293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685477655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685510785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685541139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685570835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685612124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685654983Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685694239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685725951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685757256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685828769Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685873022Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686013280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686053930Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686084541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686114731Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686150092Z" level=info msg="NRI interface is disabled by configuration."
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686396292Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686486749Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686550930Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686589142Z" level=info msg="containerd successfully booted in 0.028291s"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.663269012Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.685002759Z" level=info msg="Loading containers: start."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.779781751Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.847897599Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.892577077Z" level=info msg="Loading containers: done."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902420090Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902480737Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902498001Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902856617Z" level=info msg="Daemon has completed initialization"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925683807Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925904543Z" level=info msg="API listen on [::]:2376"
	Dec 04 23:35:01 ha-098000-m04 systemd[1]: Started Docker Application Container Engine.
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.029030705Z" level=info msg="Processing signal 'terminated'"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.030916905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031062918Z" level=info msg="Daemon shutdown complete"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031129826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031209544Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 04 23:35:03 ha-098000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: docker.service: Deactivated successfully.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 dockerd[1154]: time="2024-12-04T23:35:04.084800926Z" level=info msg="Starting up"
	Dec 04 23:36:04 ha-098000-m04 dockerd[1154]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1204 15:36:03.971971   20196 out.go:270] * 
	W1204 15:36:03.973111   20196 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:36:04.052589   20196 out.go:201] 

                                                
                                                
** /stderr **
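The root failure in the capture above is dockerd timing out while dialing the containerd socket ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded" at 23:36:04, sixty seconds after the 23:35:04 restart), after which systemd marks docker.service failed. A minimal diagnostic sketch for a run like this, assuming the ha-098000-m04 VM is still reachable over SSH; these commands mirror the minikube CLI usage shown elsewhere in this report and are not part of the captured run:

	# Check whether containerd came back up after the restart (hypothetical follow-up, not captured here)
	out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo systemctl status containerd --no-pager"
	# Look at containerd's own journal for startup errors
	out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo journalctl -u containerd --no-pager | tail -n 50"
	# Confirm the socket dockerd is dialing actually exists
	out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "ls -l /run/containerd/containerd.sock"

If containerd never recreated its socket after the 23:35:04 restart, the 60-second dial timeout in dockerd is the expected symptom rather than the cause.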
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-098000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:474: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-098000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-098000 -n ha-098000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-098000 logs -n 25: (3.271434146s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-098000 cp ha-098000-m03:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m02:/home/docker/cp-test_ha-098000-m03_ha-098000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m02 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m03_ha-098000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m03:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04:/home/docker/cp-test_ha-098000-m03_ha-098000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m04 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m03_ha-098000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-098000 cp testdata/cp-test.txt                                                                                            | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3261314918/001/cp-test_ha-098000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000:/home/docker/cp-test_ha-098000-m04_ha-098000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000 sudo cat                                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m04_ha-098000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m02:/home/docker/cp-test_ha-098000-m04_ha-098000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m02 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m04_ha-098000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m03:/home/docker/cp-test_ha-098000-m04_ha-098000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m03 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m04_ha-098000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-098000 node stop m02 -v=7                                                                                                 | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-098000 node start m02 -v=7                                                                                                | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-098000 -v=7                                                                                                       | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:32 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-098000 -v=7                                                                                                            | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:32 PST | 04 Dec 24 15:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-098000 --wait=true -v=7                                                                                                | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:32 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-098000                                                                                                            | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:36 PST |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 15:32:53
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 15:32:53.124576   20196 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:32:53.124878   20196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:53.124886   20196 out.go:358] Setting ErrFile to fd 2...
	I1204 15:32:53.124892   20196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:53.125142   20196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:32:53.126967   20196 out.go:352] Setting JSON to false
	I1204 15:32:53.159313   20196 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5543,"bootTime":1733349630,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 15:32:53.159464   20196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:32:53.181549   20196 out.go:177] * [ha-098000] minikube v1.34.0 on Darwin 15.0.1
	I1204 15:32:53.224271   20196 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:32:53.224311   20196 notify.go:220] Checking for updates...
	I1204 15:32:53.267840   20196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:32:53.289126   20196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 15:32:53.310338   20196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:32:53.331010   20196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 15:32:53.352255   20196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:32:53.373929   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:32:53.374098   20196 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:32:53.374835   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.374907   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:32:53.386958   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58600
	I1204 15:32:53.387294   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:32:53.387686   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:32:53.387699   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:32:53.387905   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:32:53.388016   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.418809   20196 out.go:177] * Using the hyperkit driver based on existing profile
	I1204 15:32:53.461003   20196 start.go:297] selected driver: hyperkit
	I1204 15:32:53.461036   20196 start.go:901] validating driver "hyperkit" against &{Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:53.461290   20196 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:32:53.461477   20196 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:53.461727   20196 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 15:32:53.473875   20196 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 15:32:53.481311   20196 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.481337   20196 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 15:32:53.486904   20196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:32:53.486942   20196 cni.go:84] Creating CNI manager for ""
	I1204 15:32:53.486987   20196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 15:32:53.487059   20196 start.go:340] cluster config:
	{Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:53.487162   20196 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:53.508071   20196 out.go:177] * Starting "ha-098000" primary control-plane node in "ha-098000" cluster
	I1204 15:32:53.529205   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:32:53.529292   20196 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1204 15:32:53.529312   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:32:53.529537   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:32:53.529555   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:32:53.529727   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:32:53.530635   20196 start.go:360] acquireMachinesLock for ha-098000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:32:53.530735   20196 start.go:364] duration metric: took 76.824µs to acquireMachinesLock for "ha-098000"
	I1204 15:32:53.530765   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:32:53.530784   20196 fix.go:54] fixHost starting: 
	I1204 15:32:53.531293   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.531320   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:32:53.542703   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58602
	I1204 15:32:53.543046   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:32:53.543457   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:32:53.543473   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:32:53.543695   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:32:53.543798   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.543917   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:32:53.544005   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.544085   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 19294
	I1204 15:32:53.545215   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid 19294 missing from process table
	I1204 15:32:53.545258   20196 fix.go:112] recreateIfNeeded on ha-098000: state=Stopped err=<nil>
	I1204 15:32:53.545275   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	W1204 15:32:53.545373   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:32:53.586803   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000" ...
	I1204 15:32:53.608028   20196 main.go:141] libmachine: (ha-098000) Calling .Start
	I1204 15:32:53.608287   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.608354   20196 main.go:141] libmachine: (ha-098000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid
	I1204 15:32:53.610773   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid 19294 missing from process table
	I1204 15:32:53.610786   20196 main.go:141] libmachine: (ha-098000) DBG | pid 19294 is in state "Stopped"
	I1204 15:32:53.610801   20196 main.go:141] libmachine: (ha-098000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid...
	I1204 15:32:53.611292   20196 main.go:141] libmachine: (ha-098000) DBG | Using UUID 70106e4e-8082-4c46-9279-8221d5ed18af
	I1204 15:32:53.728648   20196 main.go:141] libmachine: (ha-098000) DBG | Generated MAC 46:3b:47:9c:31:41
	I1204 15:32:53.728673   20196 main.go:141] libmachine: (ha-098000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:32:53.728953   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70106e4e-8082-4c46-9279-8221d5ed18af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000425170)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:32:53.728996   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70106e4e-8082-4c46-9279-8221d5ed18af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000425170)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:32:53.729068   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "70106e4e-8082-4c46-9279-8221d5ed18af", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/ha-098000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:32:53.729113   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 70106e4e-8082-4c46-9279-8221d5ed18af -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/ha-098000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:32:53.729129   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:32:53.730591   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Pid is 20209
	I1204 15:32:53.731014   20196 main.go:141] libmachine: (ha-098000) DBG | Attempt 0
	I1204 15:32:53.731028   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.731114   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:32:53.732978   20196 main.go:141] libmachine: (ha-098000) DBG | Searching for 46:3b:47:9c:31:41 in /var/db/dhcpd_leases ...
	I1204 15:32:53.733030   20196 main.go:141] libmachine: (ha-098000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:32:53.733053   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:32:53.733076   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f47a}
	I1204 15:32:53.733086   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f3e2}
	I1204 15:32:53.733096   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f369}
	I1204 15:32:53.733112   20196 main.go:141] libmachine: (ha-098000) DBG | Found match: 46:3b:47:9c:31:41
	I1204 15:32:53.733119   20196 main.go:141] libmachine: (ha-098000) DBG | IP: 192.169.0.5
	I1204 15:32:53.733163   20196 main.go:141] libmachine: (ha-098000) Calling .GetConfigRaw
	I1204 15:32:53.733987   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:32:53.734258   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:32:53.734730   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:32:53.734741   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.734939   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:32:53.735075   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:32:53.735212   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:32:53.735339   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:32:53.735471   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:32:53.735700   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:32:53.735888   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:32:53.735897   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:32:53.741792   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:32:53.798085   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:32:53.799084   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:32:53.799132   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:32:53.799147   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:32:53.799159   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:32:54.212915   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:32:54.212930   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:32:54.327517   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:32:54.327538   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:32:54.327567   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:32:54.327585   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:32:54.328504   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:32:54.328518   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:00.053293   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:33:00.053310   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:33:00.053327   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:33:00.080441   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:33:04.805929   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:04.805956   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.806123   20196 buildroot.go:166] provisioning hostname "ha-098000"
	I1204 15:33:04.806135   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.806234   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.806337   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:04.806431   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.806539   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.806630   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:04.806774   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:04.806928   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:04.806937   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000 && echo "ha-098000" | sudo tee /etc/hostname
	I1204 15:33:04.881527   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000
	
	I1204 15:33:04.881546   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.881688   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:04.881782   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.881867   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.881972   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:04.882116   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:04.882259   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:04.882270   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:04.951908   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:33:04.951928   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:04.951941   20196 buildroot.go:174] setting up certificates
	I1204 15:33:04.951947   20196 provision.go:84] configureAuth start
	I1204 15:33:04.951953   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.952087   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:04.952194   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.952301   20196 provision.go:143] copyHostCerts
	I1204 15:33:04.952333   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:04.952388   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:04.952396   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:04.952514   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:04.952739   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:04.952770   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:04.952775   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:04.952846   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:04.953021   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:04.953050   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:04.953054   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:04.953117   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:04.953299   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000 san=[127.0.0.1 192.169.0.5 ha-098000 localhost minikube]
	I1204 15:33:05.029495   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:05.029569   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:05.029587   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.029725   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.029828   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.029935   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.030021   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:05.069556   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:05.069632   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:05.088502   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:05.088560   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 15:33:05.107211   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:05.107270   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:33:05.127045   20196 provision.go:87] duration metric: took 175.080758ms to configureAuth
	I1204 15:33:05.127060   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:05.127241   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:05.127255   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:05.127390   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.127495   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.127590   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.127679   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.127810   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.127983   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.128112   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.128119   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:05.194828   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:05.194840   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:05.194934   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:05.194945   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.195075   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.195184   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.195275   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.195365   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.195540   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.195677   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.195720   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:05.269411   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:05.269434   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.269574   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.269679   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.269784   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.269878   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.270029   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.270180   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.270192   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:06.947784   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:33:06.947801   20196 machine.go:96] duration metric: took 13.212685267s to provisionDockerMachine
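	The diff/mv/systemctl one-liner above is an idempotent update: the unit file is only swapped in, and systemd only reloaded and the service restarted, when the newly rendered content differs (here diff failed because no old unit existed, so the new file was installed). A rough Go equivalent of the same pattern, as a sketch that assumes it runs with enough privilege to write the unit path:

    package sketch

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit replaces path with newContent only when it differs,
    // then reloads systemd and (re)starts the unit, mirroring the
    // "diff || { mv; daemon-reload; enable; restart; }" shell pattern.
    func updateUnit(path string, newContent []byte, unit string) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return nil // content unchanged: skip reload and restart
        }
        if err := os.WriteFile(path, newContent, 0644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", unit},
            {"restart", unit},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }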
	I1204 15:33:06.947813   20196 start.go:293] postStartSetup for "ha-098000" (driver="hyperkit")
	I1204 15:33:06.947820   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:06.947830   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:06.948036   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:06.948057   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:06.948150   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:06.948258   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:06.948370   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:06.948484   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:06.990689   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:06.994074   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:06.994089   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:06.994206   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:06.994349   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:06.994356   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:06.994521   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:07.005479   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:07.040997   20196 start.go:296] duration metric: took 93.160395ms for postStartSetup
	I1204 15:33:07.041019   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.041214   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:07.041227   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.041320   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.041401   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.041488   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.041577   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.079449   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:07.079522   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:07.131796   20196 fix.go:56] duration metric: took 13.600616251s for fixHost
	I1204 15:33:07.131819   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.131964   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.132056   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.132147   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.132258   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.132400   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:07.132541   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:07.132548   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:07.198066   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355187.085615924
	
	I1204 15:33:07.198080   20196 fix.go:216] guest clock: 1733355187.085615924
	I1204 15:33:07.198085   20196 fix.go:229] Guest: 2024-12-04 15:33:07.085615924 -0800 PST Remote: 2024-12-04 15:33:07.131808 -0800 PST m=+14.052161483 (delta=-46.192076ms)
	I1204 15:33:07.198107   20196 fix.go:200] guest clock delta is within tolerance: -46.192076ms
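	The guest-clock check above runs date +%s.%N inside the VM and compares the result against the host clock. A small sketch of that comparison; the one-second tolerance here is an assumption for illustration, not necessarily minikube's threshold:

    package sketch

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns the
    // guest-minus-host offset, erroring when it exceeds the tolerance.
    func clockDelta(guestOut string) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        // float64 loses a little nanosecond precision; fine for a tolerance check
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(time.Now())
        if math.Abs(delta.Seconds()) > 1.0 { // 1s tolerance is an assumption
            return delta, fmt.Errorf("guest clock delta %v exceeds tolerance", delta)
        }
        return delta, nil
    }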
	I1204 15:33:07.198113   20196 start.go:83] releasing machines lock for "ha-098000", held for 13.666979222s
	I1204 15:33:07.198132   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198272   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:07.198375   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198673   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198785   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198878   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:07.198921   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.198947   20196 ssh_runner.go:195] Run: cat /version.json
	I1204 15:33:07.198968   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.199026   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.199093   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.199123   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.199209   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.199228   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.199298   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.199315   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.199396   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.233868   20196 ssh_runner.go:195] Run: systemctl --version
	I1204 15:33:07.278985   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 15:33:07.283423   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:07.283478   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:07.298510   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:07.298524   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:07.298651   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:07.315201   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:07.324137   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:07.332963   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:07.333027   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:07.341883   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:07.350757   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:07.359678   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:07.368612   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:07.377607   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:07.386447   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:07.395124   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:33:07.404070   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:07.412097   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:07.412157   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:07.421208   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
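	The three lines above show a fallback at work: the bridge-netfilter sysctl path is absent until br_netfilter is loaded, after which IPv4 forwarding is enabled. A compact sketch of the same sequence (must run as root; names and error text are illustrative):

    package sketch

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureNetfilter loads br_netfilter if the bridge sysctl is missing,
    // then enables IPv4 forwarding, matching the log's fallback order.
    func ensureNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // sysctl path missing, as in the log: load the module first
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        // equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }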
	I1204 15:33:07.429418   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:07.524346   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:07.542570   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:07.542668   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:07.559288   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:07.569950   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:07.583434   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:07.593916   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:07.603881   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:07.624337   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:07.634820   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:07.649640   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:07.652619   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:07.659817   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
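	"scp memory" in these lines means copying an in-memory asset to the remote host rather than a file on disk. One way to sketch that with a plain ssh child process and sudo tee (host/keyPath are hypothetical parameters; the real runner keeps a persistent SSH session instead of spawning one process per file):

    package sketch

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // scpMemory streams an in-memory asset to remotePath over ssh.
    func scpMemory(host, keyPath, remotePath string, data []byte) error {
        cmd := exec.Command("ssh", "-i", keyPath, host,
            fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
        cmd.Stdin = bytes.NewReader(data) // the "memory" side of scp memory
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("write %s: %v: %s", remotePath, err, out)
        }
        return nil
    }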
	I1204 15:33:07.673288   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:07.772876   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:07.878665   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:07.878744   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:07.892585   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:07.986161   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:33:10.248338   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.262094537s)
	I1204 15:33:10.248412   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:33:10.259004   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:33:10.272350   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:10.282710   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:33:10.373201   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:33:10.481588   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:10.590503   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:33:10.604294   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:10.614461   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:10.704083   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:33:10.769517   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:33:10.769615   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:33:10.774192   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:33:10.774266   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:33:10.777449   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:33:10.800815   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 15:33:10.800899   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:10.817205   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:10.856841   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:33:10.856890   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:10.857354   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:33:10.862069   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
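	The bash pipeline above rewrites /etc/hosts in place: strip any existing host.minikube.internal line, then append the current mapping. A Go rendering of the same idea (a sketch only; no file locking or atomic rename):

    package sketch

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any stale mapping for name and appends a
    // fresh "ip<TAB>name" line, like the grep -v / echo pipeline above.
    func ensureHostsEntry(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // drop stale entry
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }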
	I1204 15:33:10.871775   20196 kubeadm.go:883] updating cluster {Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 15:33:10.871875   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:10.871949   20196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:33:10.885784   20196 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1204 15:33:10.885796   20196 docker.go:619] Images already preloaded, skipping extraction
	I1204 15:33:10.885882   20196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:33:10.904423   20196 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1204 15:33:10.904444   20196 cache_images.go:84] Images are preloaded, skipping loading
	I1204 15:33:10.904450   20196 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1204 15:33:10.904531   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:33:10.904612   20196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 15:33:10.937949   20196 cni.go:84] Creating CNI manager for ""
	I1204 15:33:10.937963   20196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 15:33:10.937974   20196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 15:33:10.938009   20196 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-098000 NodeName:ha-098000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 15:33:10.938085   20196 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-098000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
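	The kubeadm config printed above is presumably rendered from a template fed by the options struct logged at kubeadm.go:189. A toy text/template sketch of that kind of rendering; the template text and field names here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // params holds a few of the values that vary in the config above.
    type params struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
    }

    const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(cfg))
        _ = t.Execute(os.Stdout, params{
            AdvertiseAddress: "192.169.0.5",
            BindPort:         8443,
            NodeName:         "ha-098000",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
        })
    }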
	I1204 15:33:10.938101   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:33:10.938174   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:33:10.950599   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:33:10.950678   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
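	The vip_leaseduration/vip_renewdeadline/vip_retryperiod values in the manifest (5/3/1 seconds) follow the usual invariant for lease-based leader election: retry < renew < lease. A tiny sketch that checks this relationship (the function is illustrative, not part of kube-vip):

    package sketch

    import (
        "fmt"
        "time"
    )

    // e.g. checkLeaderElection(5*time.Second, 3*time.Second, 1*time.Second)
    func checkLeaderElection(lease, renew, retry time.Duration) error {
        if !(retry < renew && renew < lease) {
            return fmt.Errorf("want retry < renew < lease, got %v / %v / %v",
                retry, renew, lease)
        }
        return nil
    }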
	I1204 15:33:10.950747   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:33:10.959008   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:33:10.959066   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 15:33:10.966355   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1204 15:33:10.979785   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:33:10.993124   20196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1204 15:33:11.007280   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:33:11.020699   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:33:11.023569   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:11.032639   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:11.133629   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:11.148832   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.5
	I1204 15:33:11.148845   20196 certs.go:194] generating shared ca certs ...
	I1204 15:33:11.148855   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.149029   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:33:11.149085   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:33:11.149095   20196 certs.go:256] generating profile certs ...
	I1204 15:33:11.149184   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:33:11.149204   20196 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330
	I1204 15:33:11.149219   20196 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I1204 15:33:11.369000   20196 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 ...
	I1204 15:33:11.369023   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330: {Name:mkee72feeeccd665b141717d3a28fdfb2c7bde31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.369371   20196 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330 ...
	I1204 15:33:11.369381   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330: {Name:mk73951855cf52179c105169e788f46cc4d39a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.369660   20196 certs.go:381] copying /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt
	I1204 15:33:11.369853   20196 certs.go:385] copying /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key
	I1204 15:33:11.370068   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:33:11.370078   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:33:11.370100   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:33:11.370120   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:33:11.370139   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:33:11.370157   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:33:11.370176   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:33:11.370196   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:33:11.370213   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:33:11.370295   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:33:11.370331   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:33:11.370340   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:33:11.370387   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:33:11.370418   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:33:11.370453   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:33:11.370519   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:11.370552   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.370573   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.370591   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.371058   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:33:11.399000   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:33:11.441701   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:33:11.476788   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:33:11.508692   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:33:11.528963   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:33:11.548308   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:33:11.567414   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:33:11.586589   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:33:11.605437   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:33:11.624356   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:33:11.643314   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 15:33:11.656890   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:33:11.661063   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:33:11.670050   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.673329   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.673378   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.677431   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 15:33:11.686327   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:33:11.695205   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.698569   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.698616   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.702683   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:33:11.711573   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:33:11.720441   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.723730   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.723772   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.727893   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
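	The openssl x509 -hash / ln -fs pairs above install each CA under the <subject-hash>.0 name that OpenSSL looks up in /etc/ssl/certs. A sketch of the same install step, shelling out to openssl as the log does (it replaces any stale link outright instead of testing with test -L):

    package sketch

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert symlinks certPath into /etc/ssl/certs under its
    // OpenSSL subject-hash name, e.g. b5213941.0 for minikubeCA.pem.
    func linkCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace a stale link, if any
        return os.Symlink(certPath, link)
    }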
	I1204 15:33:11.736772   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:33:11.740128   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:33:11.744800   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:33:11.749129   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:33:11.753890   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:33:11.758287   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:33:11.762608   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
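	Each of the -checkend 86400 calls above asks whether a certificate expires within the next 24 hours. The same check done in Go against the PEM file directly:

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: true when the
    // certificate at path expires inside the given window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }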
	I1204 15:33:11.766918   20196 kubeadm.go:392] StartCluster: {Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:33:11.767041   20196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:33:11.779240   20196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 15:33:11.787479   20196 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 15:33:11.787491   20196 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 15:33:11.787539   20196 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 15:33:11.796840   20196 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:33:11.797140   20196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-098000" does not appear in /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.797223   20196 kubeconfig.go:62] /Users/jenkins/minikube-integration/20045-17258/kubeconfig needs updating (will repair): [kubeconfig missing "ha-098000" cluster setting kubeconfig missing "ha-098000" context setting]
	I1204 15:33:11.797420   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/kubeconfig: {Name:mk988c2800ea459104871ce2a5d515d71b51f8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.797819   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.798024   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:33:11.798341   20196 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 15:33:11.798533   20196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 15:33:11.806274   20196 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1204 15:33:11.806292   20196 kubeadm.go:597] duration metric: took 18.792967ms to restartPrimaryControlPlane
	I1204 15:33:11.806299   20196 kubeadm.go:394] duration metric: took 39.384435ms to StartCluster
	I1204 15:33:11.806313   20196 settings.go:142] acquiring lock: {Name:mk99ad63e4feda725ee10448138b299c26bf8cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.806400   20196 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.806790   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/kubeconfig: {Name:mk988c2800ea459104871ce2a5d515d71b51f8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.807009   20196 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:11.807022   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:33:11.807035   20196 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 15:33:11.807145   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.850133   20196 out.go:177] * Enabled addons: 
	I1204 15:33:11.871157   20196 addons.go:510] duration metric: took 64.116535ms for enable addons: enabled=[]
	I1204 15:33:11.871244   20196 start.go:246] waiting for cluster config update ...
	I1204 15:33:11.871256   20196 start.go:255] writing updated cluster config ...
	I1204 15:33:11.894284   20196 out.go:201] 
	I1204 15:33:11.915277   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.915378   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:11.939339   20196 out.go:177] * Starting "ha-098000-m02" control-plane node in "ha-098000" cluster
	I1204 15:33:11.981186   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:11.981222   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:11.981421   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:33:11.981442   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:11.981558   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:11.982398   20196 start.go:360] acquireMachinesLock for ha-098000-m02: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:11.982475   20196 start.go:364] duration metric: took 58.776µs to acquireMachinesLock for "ha-098000-m02"
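	acquireMachinesLock above is a named lock acquired with a 500ms retry delay and a 13-minute timeout. A file-lock sketch with the same retry/timeout shape, using flock; the real implementation differs, and the function here is illustrative:

    package sketch

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // acquire takes an exclusive, non-blocking flock on path, retrying
    // every delay until timeout (cf. Delay:500ms Timeout:13m0s above).
    func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0600)
        if err != nil {
            return nil, err
        }
        deadline := time.Now().Add(timeout)
        for {
            if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
                return f, nil // caller releases the lock by closing f
            }
            if time.Now().After(deadline) {
                f.Close()
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }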
	I1204 15:33:11.982495   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:33:11.982501   20196 fix.go:54] fixHost starting: m02
	I1204 15:33:11.982818   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:11.982845   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:11.994288   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58624
	I1204 15:33:11.994640   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:11.995007   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:11.995021   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:11.995253   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:11.995373   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:11.995490   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetState
	I1204 15:33:11.995578   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:11.995648   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20139
	I1204 15:33:11.996810   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 20139 missing from process table
	I1204 15:33:11.996835   20196 fix.go:112] recreateIfNeeded on ha-098000-m02: state=Stopped err=<nil>
	I1204 15:33:11.996847   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	W1204 15:33:11.996942   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:33:12.039213   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m02" ...
	I1204 15:33:12.060086   20196 main.go:141] libmachine: (ha-098000-m02) Calling .Start
	I1204 15:33:12.060346   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:12.060380   20196 main.go:141] libmachine: (ha-098000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid
	I1204 15:33:12.061608   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 20139 missing from process table
	I1204 15:33:12.061617   20196 main.go:141] libmachine: (ha-098000-m02) DBG | pid 20139 is in state "Stopped"
	I1204 15:33:12.061626   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid...
	I1204 15:33:12.061806   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Using UUID 2486faac-afab-449a-8055-5ee234f7d16f
	I1204 15:33:12.086653   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Generated MAC b2:39:f5:23:0b:32
	I1204 15:33:12.086676   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:33:12.086820   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2486faac-afab-449a-8055-5ee234f7d16f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004233b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:12.086851   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2486faac-afab-449a-8055-5ee234f7d16f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004233b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:12.086887   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2486faac-afab-449a-8055-5ee234f7d16f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/ha-098000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:33:12.086920   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2486faac-afab-449a-8055-5ee234f7d16f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/ha-098000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
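	For readers unfamiliar with hyperkit's CLI, the flags in the CmdLine above decode roughly as follows. This gloss is based on general hyperkit/xhyve conventions rather than anything stated in this report, so treat it as informal annotation:
	    # -A              generate ACPI tables for the guest
	    # -u              keep the RTC in UTC
	    # -F <file>       write the hyperkit pid to <file>
	    # -c 2            two virtual CPUs
	    # -m 2200M        2200 MiB of guest memory
	    # -s <slot>,<dev> virtual PCI slots: hostbridge, lpc, virtio-net,
	    #                 virtio-blk (raw disk), ahci-cd (boot2docker ISO), virtio-rnd
	    # -U <uuid>       stable VM UUID, so the guest keeps the same DHCP identity
	    # -l com1,autopty=...  serial console on a pty, mirrored to console-ring
	    # -f kexec,<kernel>,<initrd>,<cmdline>  boot the given kernel directly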
	I1204 15:33:12.086929   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:33:12.088450   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Pid is 20220
	I1204 15:33:12.088937   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Attempt 0
	I1204 15:33:12.088953   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:12.089027   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20220
	I1204 15:33:12.090875   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Searching for b2:39:f5:23:0b:32 in /var/db/dhcpd_leases ...
	I1204 15:33:12.090963   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:33:12.090982   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:33:12.091003   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:33:12.091026   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f47a}
	I1204 15:33:12.091037   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Found match: b2:39:f5:23:0b:32
	I1204 15:33:12.091047   20196 main.go:141] libmachine: (ha-098000-m02) DBG | IP: 192.169.0.6
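	Note how the lease matched: /var/db/dhcpd_leases stores hardware addresses with leading zeros stripped (b2:39:f5:23:b:32 in the entry above), yet the driver reports a match on the generated MAC b2:39:f5:23:0b:32, so it evidently normalizes both forms before comparing. A literal grep for the generated MAC can therefore miss; a manual lookup has to use the normalized form:
	    # hw_address fields drop leading zeros, so match the stripped form:
	    grep 'b2:39:f5:23:b:32' /var/db/dhcpd_leases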
	I1204 15:33:12.091078   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetConfigRaw
	I1204 15:33:12.091745   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:12.091957   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:12.092493   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:33:12.092503   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:12.092649   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:12.092776   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:12.092901   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:12.093004   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:12.093096   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:12.093267   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:12.093463   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:12.093473   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:33:12.099465   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:33:12.108663   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:33:12.109633   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:12.109661   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:12.109674   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:12.109689   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:12.508437   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:33:12.508452   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:33:12.623247   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:12.623267   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:12.623283   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:12.623289   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:12.624086   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:33:12.624095   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:18.362951   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 15:33:18.362990   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 15:33:18.362997   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 15:33:18.387781   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 15:33:23.149238   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:23.149254   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.149403   20196 buildroot.go:166] provisioning hostname "ha-098000-m02"
	I1204 15:33:23.149415   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.149509   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.149612   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.149697   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.149796   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.149882   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.150012   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.150165   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.150173   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m02 && echo "ha-098000-m02" | sudo tee /etc/hostname
	I1204 15:33:23.207677   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m02
	
	I1204 15:33:23.207693   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.207831   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.207942   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.208053   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.208156   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.208340   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.208503   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.208515   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:23.265398   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
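	The SSH script above is the usual Debian-style hostname pinning: if no /etc/hosts line already ends with the new hostname, it either rewrites the existing 127.0.1.1 entry or appends one, so the hostname resolves locally without DNS. A standalone restatement of that logic (NEW_HOSTNAME is an illustrative variable; the real script runs inline over SSH with the name baked in):
	    NEW_HOSTNAME=ha-098000-m02   # example value from this run
	    if ! grep -q "\s${NEW_HOSTNAME}$" /etc/hosts; then
	      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	        sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
	      else
	        echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
	      fi
	    fi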
	I1204 15:33:23.265414   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:23.265426   20196 buildroot.go:174] setting up certificates
	I1204 15:33:23.265434   20196 provision.go:84] configureAuth start
	I1204 15:33:23.265443   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.265604   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:23.265696   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.265792   20196 provision.go:143] copyHostCerts
	I1204 15:33:23.265821   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:23.265868   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:23.265874   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:23.266044   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:23.266308   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:23.266347   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:23.266352   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:23.266606   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:23.266780   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:23.266810   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:23.266815   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:23.266891   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:23.267067   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m02 san=[127.0.0.1 192.169.0.6 ha-098000-m02 localhost minikube]
	I1204 15:33:23.418588   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:23.418649   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:23.418663   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.418794   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.418895   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.418994   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.419094   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:23.449777   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:23.449845   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:23.469736   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:23.469808   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:33:23.489512   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:23.489573   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:33:23.509353   20196 provision.go:87] duration metric: took 243.902721ms to configureAuth
	I1204 15:33:23.509367   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:23.509536   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:23.509550   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:23.509693   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.509787   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.509886   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.509981   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.510059   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.510190   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.510321   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.510328   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:23.557917   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:23.557929   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:23.558018   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:23.558034   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.558154   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.558255   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.558337   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.558428   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.558600   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.558722   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.558764   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:23.619577   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:23.619599   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.619741   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.619853   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.619941   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.620042   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.620196   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.620336   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.620348   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:25.265062   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
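	The exchange above is a compare-then-swap update: the unit is written to docker.service.new, diffed against the installed file, and only moved into place (followed by daemon-reload, enable, and restart) when they differ. The "can't stat" message is the expected first-boot case: no unit existed yet, so diff fails, the new file is installed, and systemd creates the multi-user.target.wants symlink. The idiom, generalized as a sketch using the same paths as this run:
	    UNIT=/lib/systemd/system/docker.service
	    # diff exits non-zero if the files differ or the target is missing,
	    # so the replace-and-restart branch runs only when something changed
	    sudo diff -u "$UNIT" "$UNIT.new" || {
	      sudo mv "$UNIT.new" "$UNIT"
	      sudo systemctl -f daemon-reload &&
	        sudo systemctl -f enable docker &&
	        sudo systemctl -f restart docker
	    }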
	
	I1204 15:33:25.265078   20196 machine.go:96] duration metric: took 13.172205227s to provisionDockerMachine
	I1204 15:33:25.265092   20196 start.go:293] postStartSetup for "ha-098000-m02" (driver="hyperkit")
	I1204 15:33:25.265099   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:25.265111   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.265311   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:25.265332   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.265441   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.265529   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.265633   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.265739   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.304266   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:25.311180   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:25.311193   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:25.311283   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:25.311424   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:25.311431   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:25.311607   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:25.324859   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:25.357942   20196 start.go:296] duration metric: took 92.839826ms for postStartSetup
	I1204 15:33:25.357966   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.358160   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:25.358173   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.358261   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.358352   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.358436   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.358521   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.389685   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:25.389754   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:25.422337   20196 fix.go:56] duration metric: took 13.439453986s for fixHost
	I1204 15:33:25.422364   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.422533   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.422647   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.422735   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.422815   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.422958   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:25.423099   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:25.423107   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:25.472632   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355205.621764225
	
	I1204 15:33:25.472647   20196 fix.go:216] guest clock: 1733355205.621764225
	I1204 15:33:25.472652   20196 fix.go:229] Guest: 2024-12-04 15:33:25.621764225 -0800 PST Remote: 2024-12-04 15:33:25.422353 -0800 PST m=+32.342189685 (delta=199.411225ms)
	I1204 15:33:25.472663   20196 fix.go:200] guest clock delta is within tolerance: 199.411225ms
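	The fix.go lines above implement a clock-skew check: the guest runs date +%s.%N, and the driver parses the seconds.nanoseconds value against the host clock; here the ~199ms delta is within tolerance, so no time resync is attempted. The guest-side probe, for reference:
	    date +%s.%N    # e.g. 1733355205.621764225, as captured above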
	I1204 15:33:25.472667   20196 start.go:83] releasing machines lock for "ha-098000-m02", held for 13.489803052s
	I1204 15:33:25.472697   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.472837   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:25.496277   20196 out.go:177] * Found network options:
	I1204 15:33:25.537194   20196 out.go:177]   - NO_PROXY=192.169.0.5
	W1204 15:33:25.558335   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:25.558422   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559432   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559728   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559899   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:25.559950   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	W1204 15:33:25.560026   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:25.560173   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:33:25.560212   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.560218   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.560413   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.560435   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.560588   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.560653   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.560755   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.560803   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.560929   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	W1204 15:33:25.589676   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:25.589750   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:25.635633   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:25.635654   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:25.635765   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:25.651707   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:25.660095   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:25.668588   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:25.668650   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:25.676830   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:25.685079   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:25.693509   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:25.701733   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:25.710137   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:25.718450   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:25.726929   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:33:25.735114   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:25.742569   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:25.742622   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:25.751585   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
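	The pattern above handles a kernel where the bridge-netfilter sysctl does not exist yet: the key only appears once the br_netfilter module is loaded, so a failed probe (status 255 above) is answered by loading the module and then enabling IPv4 forwarding. Condensed, using the same commands as the log:
	    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"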
	I1204 15:33:25.759751   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:25.851537   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:25.870178   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:25.870261   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:25.886777   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:25.898631   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:25.915954   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:25.927090   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:25.937345   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:25.958314   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:25.968609   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:25.983636   20196 ssh_runner.go:195] Run: which cri-dockerd
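	The crictl.yaml rewrite just above switches the CRI endpoint from containerd's socket (configured earlier during runtime detection) to cri-dockerd's, so subsequent crictl calls reach Docker through the CRI shim. The entire resulting file is one line, as written by the tee above:
	    cat /etc/crictl.yaml
	    # runtime-endpoint: unix:///var/run/cri-dockerd.sock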
	I1204 15:33:25.986491   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:25.993508   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:26.006712   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:26.100912   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:26.190828   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:26.190859   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:26.204976   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:26.305524   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:33:28.666691   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.361082583s)
	I1204 15:33:28.666774   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:33:28.677849   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:33:28.691293   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:28.702315   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:33:28.804235   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:33:28.895456   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:29.008598   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:33:29.022244   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:29.033285   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:29.123647   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:33:29.194113   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:33:29.194213   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:33:29.198266   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:33:29.198329   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:33:29.201217   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:33:29.226480   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 15:33:29.226574   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:29.245410   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:29.286251   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:33:29.327924   20196 out.go:177]   - env NO_PROXY=192.169.0.5
	I1204 15:33:29.348859   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:29.349296   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:33:29.353761   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:29.363356   20196 mustload.go:65] Loading cluster: ha-098000
	I1204 15:33:29.363524   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:29.363748   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:29.363768   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:29.374807   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58646
	I1204 15:33:29.375120   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:29.375473   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:29.375491   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:29.375697   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:29.375799   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:33:29.375885   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:29.375946   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:33:29.377121   20196 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:33:29.377369   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:29.377393   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:29.388419   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58648
	I1204 15:33:29.388721   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:29.389015   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:29.389049   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:29.389281   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:29.389378   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:29.389495   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.6
	I1204 15:33:29.389501   20196 certs.go:194] generating shared ca certs ...
	I1204 15:33:29.389513   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:29.389656   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:33:29.389710   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:33:29.389719   20196 certs.go:256] generating profile certs ...
	I1204 15:33:29.389811   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:33:29.389878   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.3ecf7e1a
	I1204 15:33:29.389931   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:33:29.389938   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:33:29.389964   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:33:29.389985   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:33:29.390009   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:33:29.390029   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:33:29.390048   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:33:29.390067   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:33:29.390086   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:33:29.390163   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:33:29.390207   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:33:29.390215   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:33:29.390250   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:33:29.390285   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:33:29.390316   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:33:29.390382   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:29.390418   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.390439   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.390458   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.390483   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:29.390568   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:29.390658   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:29.390751   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:29.390833   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:29.422140   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 15:33:29.425696   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 15:33:29.434269   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 15:33:29.437377   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 15:33:29.446042   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 15:33:29.449183   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 15:33:29.457490   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 15:33:29.460647   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1204 15:33:29.469352   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 15:33:29.472755   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 15:33:29.481093   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 15:33:29.484099   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1204 15:33:29.492651   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:33:29.513068   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:33:29.533396   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:33:29.553633   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:33:29.573360   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:33:29.592833   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:33:29.612325   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:33:29.631705   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:33:29.651772   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:33:29.671647   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:33:29.691028   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:33:29.710680   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 15:33:29.724088   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 15:33:29.738048   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 15:33:29.751781   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1204 15:33:29.765280   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 15:33:29.779127   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1204 15:33:29.792641   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 15:33:29.806335   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:33:29.810643   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:33:29.819095   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.822486   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.822534   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.826729   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:33:29.835308   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:33:29.843890   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.847451   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.847503   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.851708   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
	I1204 15:33:29.859922   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:33:29.868147   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.871612   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.871654   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.875808   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
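	The ls/openssl/ln triples above reproduce what c_rehash does: OpenSSL looks up trusted CAs in /etc/ssl/certs by subject-name hash, so each installed certificate gets a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 here). To see where a link name comes from (sketch, run on the guest):
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"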
	I1204 15:33:29.884074   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:33:29.887539   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:33:29.891899   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:33:29.896170   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:33:29.900557   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:33:29.904814   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:33:29.909235   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
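
The "-checkend 86400" probes above ask OpenSSL whether each certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The equivalent check with Go's crypto/x509, using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same condition as "openssl x509 -checkend 86400".
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h; regenerate")
        } else {
            fmt.Println("certificate valid beyond 24h")
        }
    }
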
	I1204 15:33:29.913504   20196 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1204 15:33:29.913564   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:33:29.913578   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:33:29.913625   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:33:29.926130   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:33:29.926164   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
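
The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), where kubelet runs it as a static pod that advertises the HA VIP 192.169.0.254 via ARP and elects a leader through the plndr-cp-lock lease. A quick sanity check that such a manifest parses and carries the VIP, using gopkg.in/yaml.v3 (illustrative, not minikube's own validation):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    type pod struct {
        Spec struct {
            Containers []struct {
                Env []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var p pod
        if err := yaml.Unmarshal(data, &p); err != nil {
            panic(err)
        }
        for _, c := range p.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" {
                    fmt.Println("kube-vip will advertise", e.Value) // 192.169.0.254
                }
            }
        }
    }
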
	I1204 15:33:29.926229   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:33:29.933952   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:33:29.934013   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 15:33:29.941532   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1204 15:33:29.955276   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:33:29.968570   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:33:29.982327   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:33:29.985248   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
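
The one-liner above is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the current VIP, and copy the result back via sudo (the /tmp staging file exists only to cross the privilege boundary). A rough Go equivalent of the filter-and-append, ignoring the sudo step:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.169.0.254\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Same filter as grep -v $'\tcontrol-plane.minikube.internal$'.
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }
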
	I1204 15:33:29.994738   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:30.085095   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:30.100297   20196 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:30.100505   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:30.121980   20196 out.go:177] * Verifying Kubernetes components...
	I1204 15:33:30.163546   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:30.296003   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:30.317056   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:30.317267   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 15:33:30.317312   20196 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1204 15:33:30.317488   20196 node_ready.go:35] waiting up to 6m0s for node "ha-098000-m02" to be "Ready" ...
	I1204 15:33:30.317571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:30.317576   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:30.317583   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:30.317592   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.429719   20196 round_trippers.go:574] Response Status: 200 OK in 8111 milliseconds
	I1204 15:33:38.437420   20196 node_ready.go:49] node "ha-098000-m02" has status "Ready":"True"
	I1204 15:33:38.437441   20196 node_ready.go:38] duration metric: took 8.119707596s for node "ha-098000-m02" to be "Ready" ...
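
node_ready polls GET /api/v1/nodes/<name> until the Ready condition reports True; here the first response alone took 8.1s because the apiserver was still settling after the VIP override. A minimal client-go sketch of that wait, assuming a kubeconfig at a hypothetical path:

    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same budget as the log
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-098000-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        panic("timed out waiting for node to be Ready")
    }
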
	I1204 15:33:38.437450   20196 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:33:38.437502   20196 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 15:33:38.437515   20196 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 15:33:38.437571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:38.437578   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.437593   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.437599   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.455661   20196 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1204 15:33:38.464148   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.464210   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:33:38.464215   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.464221   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.464224   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.470699   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:33:38.471292   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.471302   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.471308   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.471312   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.481534   20196 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 15:33:38.481959   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.481970   20196 pod_ready.go:82] duration metric: took 17.803771ms for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.481977   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.482020   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:33:38.482026   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.482032   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.482035   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.487605   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:38.488267   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.488322   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.488329   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.488343   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.490575   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:38.491180   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.491192   20196 pod_ready.go:82] duration metric: took 9.208421ms for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.491202   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.491280   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000
	I1204 15:33:38.491287   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.491293   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.491297   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.494530   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:38.495165   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.495173   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.495180   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.495184   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.499549   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:38.499961   20196 pod_ready.go:93] pod "etcd-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.499972   20196 pod_ready.go:82] duration metric: took 8.763238ms for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.499980   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.500023   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m02
	I1204 15:33:38.500028   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.500034   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.500039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.506409   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:33:38.506828   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:38.506837   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.506843   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.506846   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.511940   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:38.512316   20196 pod_ready.go:93] pod "etcd-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.512327   20196 pod_ready.go:82] duration metric: took 12.340986ms for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.512334   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.512373   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m03
	I1204 15:33:38.512378   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.512384   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.512389   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.516730   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:38.638087   20196 request.go:632] Waited for 120.794515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:38.638124   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:38.638130   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.638161   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.638169   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.640203   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:38.640614   20196 pod_ready.go:93] pod "etcd-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.640625   20196 pod_ready.go:82] duration metric: took 128.282ms for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
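
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket limiter (defaults: QPS 5, Burst 10; QPS:0, Burst:0 in the rest.Config dump above means "use defaults"), not from server-side API Priority and Fairness. When a burst of sequential GETs like this is expected, the limits can be raised on rest.Config before building the clientset:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default is 10
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
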
	I1204 15:33:38.640638   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.838617   20196 request.go:632] Waited for 197.931176ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:33:38.838688   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:33:38.838697   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.838706   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.838712   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.840867   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:39.037679   20196 request.go:632] Waited for 196.178205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:39.037714   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:39.037719   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.037772   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.037777   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.042421   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:39.042726   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.042736   20196 pod_ready.go:82] duration metric: took 402.080499ms for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.042743   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.237786   20196 request.go:632] Waited for 195.001118ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:33:39.237820   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:33:39.237825   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.237830   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.237835   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.243495   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:39.437668   20196 request.go:632] Waited for 193.740455ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:39.437701   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:39.437706   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.437712   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.437719   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.440123   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:39.440472   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.440482   20196 pod_ready.go:82] duration metric: took 397.72282ms for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.440490   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.638172   20196 request.go:632] Waited for 197.630035ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:33:39.638227   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:33:39.638235   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.638277   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.638301   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.641465   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:39.837863   20196 request.go:632] Waited for 195.844278ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:39.837914   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:39.837923   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.838008   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.838017   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.841077   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:39.841414   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.841423   20196 pod_ready.go:82] duration metric: took 400.91619ms for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.841431   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.037805   20196 request.go:632] Waited for 196.32052ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:33:40.037839   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:33:40.037845   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.037851   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.037857   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.040255   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.238963   20196 request.go:632] Waited for 198.140778ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:40.239022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:40.239028   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.239040   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.239045   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.242092   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:40.242401   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:40.242411   20196 pod_ready.go:82] duration metric: took 400.963216ms for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.242419   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.438693   20196 request.go:632] Waited for 196.229899ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:33:40.438729   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:33:40.438735   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.438741   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.438745   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.441139   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.637709   20196 request.go:632] Waited for 196.13524ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:40.637752   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:40.637777   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.637783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.637787   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.640278   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.640704   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:40.640714   20196 pod_ready.go:82] duration metric: took 398.278068ms for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.640722   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.838825   20196 request.go:632] Waited for 198.055929ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:33:40.838901   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:33:40.838908   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.838927   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.838932   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.841541   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.037964   20196 request.go:632] Waited for 195.880635ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:41.038037   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:41.038043   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.038049   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.038054   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.041754   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.042231   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.042241   20196 pod_ready.go:82] duration metric: took 401.502224ms for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.042248   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.237873   20196 request.go:632] Waited for 195.582123ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:33:41.237946   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:33:41.237952   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.237957   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.237961   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.240730   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.438126   20196 request.go:632] Waited for 196.947205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:41.438157   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:41.438167   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.438207   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.438212   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.440777   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.441074   20196 pod_ready.go:93] pod "kube-proxy-8dv6r" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.441084   20196 pod_ready.go:82] duration metric: took 398.818652ms for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.441091   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.639164   20196 request.go:632] Waited for 198.003801ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:33:41.639309   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:33:41.639320   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.639331   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.639338   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.643045   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.838863   20196 request.go:632] Waited for 195.192063ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:41.838912   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:41.838924   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.838946   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.838954   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.842314   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.842750   20196 pod_ready.go:93] pod "kube-proxy-9strn" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.842763   20196 pod_ready.go:82] duration metric: took 401.652541ms for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.842771   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.039281   20196 request.go:632] Waited for 196.459472ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:33:42.039417   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:33:42.039428   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.039439   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.039447   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.042816   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:42.238811   20196 request.go:632] Waited for 195.378249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:33:42.238885   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:33:42.238891   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.238898   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.238903   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.240764   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:42.241072   20196 pod_ready.go:93] pod "kube-proxy-mz4q2" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:42.241084   20196 pod_ready.go:82] duration metric: took 398.294263ms for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.241092   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.438843   20196 request.go:632] Waited for 197.705446ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:33:42.438887   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:33:42.438898   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.438905   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.438908   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.440868   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:42.638818   20196 request.go:632] Waited for 197.361352ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:42.638884   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:42.638895   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.638906   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.638914   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.642158   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:42.642556   20196 pod_ready.go:93] pod "kube-proxy-rf4cp" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:42.642569   20196 pod_ready.go:82] duration metric: took 401.459636ms for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.642580   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.839526   20196 request.go:632] Waited for 196.890487ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:33:42.839701   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:33:42.839713   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.839724   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.839732   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.843198   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.037789   20196 request.go:632] Waited for 194.105591ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:43.037944   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:43.037961   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.037975   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.037982   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.041343   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.041920   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.041933   20196 pod_ready.go:82] duration metric: took 399.3347ms for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.041942   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.239892   20196 request.go:632] Waited for 197.874831ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:33:43.239961   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:33:43.239969   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.239983   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.239991   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.243085   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.438099   20196 request.go:632] Waited for 194.176391ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:43.438141   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:43.438168   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.438176   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.438185   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.440115   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:43.440578   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.440586   20196 pod_ready.go:82] duration metric: took 398.625667ms for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.440601   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.639811   20196 request.go:632] Waited for 199.133254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:33:43.639908   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:33:43.639919   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.639930   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.639940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.643164   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.839903   20196 request.go:632] Waited for 196.135821ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:43.839967   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:43.839976   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.839987   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.839994   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.843566   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.844161   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.844175   20196 pod_ready.go:82] duration metric: took 403.555453ms for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.844208   20196 pod_ready.go:39] duration metric: took 5.406590624s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:33:43.844253   20196 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:33:43.844326   20196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:33:43.855983   20196 api_server.go:72] duration metric: took 13.755275558s to wait for apiserver process to appear ...
	I1204 15:33:43.855995   20196 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:33:43.856010   20196 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1204 15:33:43.860186   20196 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1204 15:33:43.860225   20196 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1204 15:33:43.860230   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.860243   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.860246   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.860683   20196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 15:33:43.860804   20196 api_server.go:141] control plane version: v1.31.2
	I1204 15:33:43.860815   20196 api_server.go:131] duration metric: took 4.815788ms to wait for apiserver health ...
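
The health gate is two probes: a raw GET /healthz that must return the literal "ok", then GET /version to read the control-plane version (v1.31.2 here). Both are one-liners with client-go, assuming the same hypothetical kubeconfig path as above:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Raw GET against /healthz; the body should be "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)
        // /version, decoded into version.Info.
        ver, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", ver.GitVersion) // v1.31.2
    }
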
	I1204 15:33:43.860824   20196 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 15:33:44.038297   20196 request.go:632] Waited for 177.420142ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.038389   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.038399   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.038411   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.038421   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.044078   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:44.049007   20196 system_pods.go:59] 26 kube-system pods found
	I1204 15:33:44.049023   20196 system_pods.go:61] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:33:44.049029   20196 system_pods.go:61] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:33:44.049032   20196 system_pods.go:61] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:33:44.049034   20196 system_pods.go:61] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:33:44.049038   20196 system_pods.go:61] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:33:44.049041   20196 system_pods.go:61] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:33:44.049043   20196 system_pods.go:61] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:33:44.049046   20196 system_pods.go:61] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:33:44.049049   20196 system_pods.go:61] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:33:44.049051   20196 system_pods.go:61] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:33:44.049054   20196 system_pods.go:61] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:33:44.049056   20196 system_pods.go:61] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:33:44.049059   20196 system_pods.go:61] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:33:44.049069   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:33:44.049073   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:33:44.049075   20196 system_pods.go:61] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:33:44.049078   20196 system_pods.go:61] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:33:44.049080   20196 system_pods.go:61] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:33:44.049084   20196 system_pods.go:61] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:33:44.049087   20196 system_pods.go:61] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:33:44.049089   20196 system_pods.go:61] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:33:44.049092   20196 system_pods.go:61] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:33:44.049094   20196 system_pods.go:61] "kube-vip-ha-098000" [618bf60c-e57e-4c04-832e-71eebf18044d] Running
	I1204 15:33:44.049097   20196 system_pods.go:61] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:33:44.049099   20196 system_pods.go:61] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:33:44.049102   20196 system_pods.go:61] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running
	I1204 15:33:44.049106   20196 system_pods.go:74] duration metric: took 188.271977ms to wait for pod list to return data ...
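
The 26-pod inventory above comes from a single list call against the kube-system namespace, with each pod judged by its phase. A sketch of the same listing:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Matches the log's "name [uid] Running" format.
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
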
	I1204 15:33:44.049112   20196 default_sa.go:34] waiting for default service account to be created ...
	I1204 15:33:44.239205   20196 request.go:632] Waited for 190.005694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:33:44.239263   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:33:44.239272   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.239283   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.239322   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.243527   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:44.243704   20196 default_sa.go:45] found service account: "default"
	I1204 15:33:44.243713   20196 default_sa.go:55] duration metric: took 194.591962ms for default service account to be created ...
	I1204 15:33:44.243719   20196 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 15:33:44.439115   20196 request.go:632] Waited for 195.322716ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.439234   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.439246   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.439258   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.439264   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.444755   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:44.449718   20196 system_pods.go:86] 26 kube-system pods found
	I1204 15:33:44.449733   20196 system_pods.go:89] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:33:44.449738   20196 system_pods.go:89] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:33:44.449741   20196 system_pods.go:89] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:33:44.449744   20196 system_pods.go:89] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:33:44.449748   20196 system_pods.go:89] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:33:44.449750   20196 system_pods.go:89] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:33:44.449753   20196 system_pods.go:89] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:33:44.449755   20196 system_pods.go:89] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:33:44.449758   20196 system_pods.go:89] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:33:44.449761   20196 system_pods.go:89] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:33:44.449765   20196 system_pods.go:89] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:33:44.449768   20196 system_pods.go:89] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:33:44.449771   20196 system_pods.go:89] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:33:44.449774   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:33:44.449777   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:33:44.449783   20196 system_pods.go:89] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:33:44.449786   20196 system_pods.go:89] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:33:44.449789   20196 system_pods.go:89] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:33:44.449793   20196 system_pods.go:89] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:33:44.449795   20196 system_pods.go:89] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:33:44.449798   20196 system_pods.go:89] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:33:44.449801   20196 system_pods.go:89] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:33:44.449804   20196 system_pods.go:89] "kube-vip-ha-098000" [618bf60c-e57e-4c04-832e-71eebf18044d] Running
	I1204 15:33:44.449806   20196 system_pods.go:89] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:33:44.449810   20196 system_pods.go:89] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:33:44.449813   20196 system_pods.go:89] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running
	I1204 15:33:44.449818   20196 system_pods.go:126] duration metric: took 206.089298ms to wait for k8s-apps to be running ...
	I1204 15:33:44.449823   20196 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 15:33:44.449890   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:33:44.461452   20196 system_svc.go:56] duration metric: took 11.623487ms for WaitForService to wait for kubelet
	I1204 15:33:44.461466   20196 kubeadm.go:582] duration metric: took 14.360743481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:33:44.461484   20196 node_conditions.go:102] verifying NodePressure condition ...
	I1204 15:33:44.639462   20196 request.go:632] Waited for 177.925125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1204 15:33:44.639538   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1204 15:33:44.639548   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.639560   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.639568   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.643595   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:44.644812   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644828   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644839   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644849   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644853   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644856   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644858   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644861   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644864   20196 node_conditions.go:105] duration metric: took 183.370218ms to run NodePressure ...
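
The NodePressure pass lists all four nodes once and reads capacity straight off node.Status.Capacity, which is where the "2 cpu / 17734596Ki ephemeral storage" figures above originate. Sketch:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a v1.ResourceList; the helpers return *resource.Quantity.
            fmt.Println(n.Name,
                "cpu:", n.Status.Capacity.Cpu().String(),
                "ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
        }
    }
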
	I1204 15:33:44.644872   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:33:44.644890   20196 start.go:255] writing updated cluster config ...
	I1204 15:33:44.665849   20196 out.go:201] 
	I1204 15:33:44.687912   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:44.688042   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.710522   20196 out.go:177] * Starting "ha-098000-m03" control-plane node in "ha-098000" cluster
	I1204 15:33:44.752466   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:44.752500   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:44.752679   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:33:44.752697   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:44.752830   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.753998   20196 start.go:360] acquireMachinesLock for ha-098000-m03: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:44.754068   20196 start.go:364] duration metric: took 52.377µs to acquireMachinesLock for "ha-098000-m03"
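
acquireMachinesLock serializes machine create/start across concurrent minikube processes, with the 500ms retry delay and 13m timeout shown in the log; here it succeeded in 52µs because nothing else held it. minikube's actual lock implementation is not shown in this log; the sketch below is a generic exclusive-lockfile version of the same idea, with a hypothetical lock path:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire retries an O_EXCL create until it wins the lock or times out.
    func acquire(path string, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(500 * time.Millisecond) // matches the Delay:500ms above
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        // ... create or start the machine while holding the lock ...
    }
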
	I1204 15:33:44.754085   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:33:44.754091   20196 fix.go:54] fixHost starting: m03
	I1204 15:33:44.754406   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:44.754430   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:44.765918   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58653
	I1204 15:33:44.766304   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:44.766704   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:44.766719   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:44.766938   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:44.767056   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:44.767166   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetState
	I1204 15:33:44.767251   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.767322   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid from json: 19347
	I1204 15:33:44.768480   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid 19347 missing from process table
	I1204 15:33:44.768517   20196 fix.go:112] recreateIfNeeded on ha-098000-m03: state=Stopped err=<nil>
	I1204 15:33:44.768528   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	W1204 15:33:44.768610   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:33:44.789653   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m03" ...
	I1204 15:33:44.831751   20196 main.go:141] libmachine: (ha-098000-m03) Calling .Start
	I1204 15:33:44.832023   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.832066   20196 main.go:141] libmachine: (ha-098000-m03) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid
	I1204 15:33:44.834593   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid 19347 missing from process table
	I1204 15:33:44.834606   20196 main.go:141] libmachine: (ha-098000-m03) DBG | pid 19347 is in state "Stopped"
	I1204 15:33:44.834626   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid...
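The three DBG lines above are the stale-pid-file check: the saved pid (19347) is no longer in the process table, so the file is removed before a fresh start. A sketch of that check in Go, assuming a plain pid file (signal 0 is the standard Unix existence probe; paths are placeholders):

	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)

	// pidAlive probes a pid with signal 0 (Unix-only; no signal is delivered).
	func pidAlive(pid int) bool {
		p, err := os.FindProcess(pid) // always succeeds on Unix
		if err != nil {
			return false
		}
		return p.Signal(syscall.Signal(0)) == nil
	}

	func main() {
		const pidFile = "/tmp/hyperkit.pid" // placeholder for the machine's pid file
		data, err := os.ReadFile(pidFile)
		if err != nil {
			fmt.Println("no pid file:", err)
			return
		}
		pid, _ := strconv.Atoi(strings.TrimSpace(string(data)))
		if !pidAlive(pid) {
			fmt.Printf("pid %d missing from process table; removing stale %s\n", pid, pidFile)
			os.Remove(pidFile)
		}
	}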
	I1204 15:33:44.835523   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Using UUID eac2e001-90c5-40d6-830d-b844e6baedeb
	I1204 15:33:44.861764   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Generated MAC 56:f8:e7:bc:e7:07
	I1204 15:33:44.861784   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:33:44.862005   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eac2e001-90c5-40d6-830d-b844e6baedeb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:44.862041   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eac2e001-90c5-40d6-830d-b844e6baedeb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:44.862100   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "eac2e001-90c5-40d6-830d-b844e6baedeb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/ha-098000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:33:44.862139   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U eac2e001-90c5-40d6-830d-b844e6baedeb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/ha-098000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:33:44.862604   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:33:44.864474   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Pid is 20231
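The driver launches hyperkit with the argument list logged above and then tracks the child pid (20231 here). A hedged sketch of that launch with os/exec, using a trimmed argument list and a placeholder pid-file path (hyperkit itself writes the pid to the -F file):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Subset of the CmdLine flags logged above; paths are placeholders.
		args := []string{
			"-A", "-u",
			"-F", "/tmp/hyperkit.pid",
			"-c", "2", "-m", "2200M",
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net",
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // mirrors "Redirecting stdout/stderr to logger"
		if err := cmd.Start(); err != nil {
			fmt.Println("start failed:", err)
			return
		}
		fmt.Println("hyperkit pid:", cmd.Process.Pid) // compare: "Pid is 20231"
	}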
	I1204 15:33:44.864862   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Attempt 0
	I1204 15:33:44.864878   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.864933   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid from json: 20231
	I1204 15:33:44.866074   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Searching for 56:f8:e7:bc:e7:07 in /var/db/dhcpd_leases ...
	I1204 15:33:44.866145   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:33:44.866158   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f4d1}
	I1204 15:33:44.866167   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:33:44.866177   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:33:44.866182   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f3e2}
	I1204 15:33:44.866187   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Found match: 56:f8:e7:bc:e7:07
	I1204 15:33:44.866193   20196 main.go:141] libmachine: (ha-098000-m03) DBG | IP: 192.169.0.7
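The lease-scan lines above map the VM's generated MAC to an IP by reading macOS's /var/db/dhcpd_leases; note the file drops leading zeros per octet ("e7:7" vs. the generated "e7:07"). A sketch of that lookup, with the brace-delimited entry layout assumed from typical lease files rather than taken from minikube's parser:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// normMAC strips leading zeros per octet so "e7:07" matches "e7:7".
	func normMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			p = strings.TrimLeft(p, "0")
			if p == "" {
				p = "0"
			}
			parts[i] = p
		}
		return strings.Join(parts, ":")
	}

	func findIPByMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		want := normMAC(mac)
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// entries look like "hw_address=1,56:f8:e7:bc:e7:7"
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:]
				}
				if normMAC(hw) == want {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("MAC %s not found in %s", mac, path)
	}

	func main() {
		fmt.Println(findIPByMAC("/var/db/dhcpd_leases", "56:f8:e7:bc:e7:07"))
	}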
	I1204 15:33:44.866266   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetConfigRaw
	I1204 15:33:44.866960   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:44.867187   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.867733   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:33:44.867748   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:44.867880   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:44.867991   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:44.868083   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:44.868175   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:44.868275   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:44.868449   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:44.868607   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:44.868615   20196 main.go:141] libmachine: About to run SSH command:
	hostname
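Provisioning begins with a bare `hostname` probe over SSH to the lease-resolved IP using the machine's key. A minimal sketch of that probe with golang.org/x/crypto/ssh; the key path is a placeholder, and host-key checking is disabled only because the target is a throwaway local VM:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/ha-098000-m03/id_rsa") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
		}
		client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname") // the same probe the log issues first
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("guest hostname: %s", out)
	}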
	I1204 15:33:44.875700   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:33:44.885221   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:33:44.886534   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:44.886590   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:44.886624   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:44.886641   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:45.310864   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:33:45.310888   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:33:45.426378   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:45.426408   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:45.426418   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:45.426427   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:45.427201   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:33:45.427213   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:51.200443   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:33:51.200513   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:33:51.200524   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:33:51.225933   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:33:55.935290   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:55.935305   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:55.935436   20196 buildroot.go:166] provisioning hostname "ha-098000-m03"
	I1204 15:33:55.935445   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:55.935551   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:55.935640   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:55.935732   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:55.935825   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:55.935912   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:55.936073   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:55.936205   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:55.936213   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m03 && echo "ha-098000-m03" | sudo tee /etc/hostname
	I1204 15:33:56.008649   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m03
	
	I1204 15:33:56.008663   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.008821   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.008915   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.009001   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.009093   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.009247   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.009386   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.009397   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:56.076925   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
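The shell fragment above is deliberately idempotent: it only rewrites the 127.0.1.1 entry if the hostname is not already present anywhere in /etc/hosts. A Go sketch that loosely mirrors the same decision (the suffix match is a simplification of the grep patterns above):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry: if no line already names the host, rewrite an existing
	// 127.0.1.1 line or append a new one; otherwise leave the file untouched.
	func ensureHostsEntry(contents, hostname string) string {
		lines := strings.Split(contents, "\n")
		for _, l := range lines {
			if strings.HasSuffix(strings.TrimSpace(l), hostname) {
				return contents // already present: nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				return strings.Join(lines, "\n")
			}
		}
		return contents + "\n127.0.1.1 " + hostname + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(ensureHostsEntry(string(data), "ha-098000-m03"))
	}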
	I1204 15:33:56.076941   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:56.076950   20196 buildroot.go:174] setting up certificates
	I1204 15:33:56.076956   20196 provision.go:84] configureAuth start
	I1204 15:33:56.076962   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:56.077121   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:56.077219   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.077318   20196 provision.go:143] copyHostCerts
	I1204 15:33:56.077346   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:56.077405   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:56.077411   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:56.077538   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:56.077740   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:56.077775   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:56.077780   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:56.077851   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:56.078007   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:56.078036   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:56.078041   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:56.078135   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:56.078295   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m03 san=[127.0.0.1 192.169.0.7 ha-098000-m03 localhost minikube]
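The server cert logged above carries both IP and DNS subject alternative names (127.0.0.1, 192.169.0.7, ha-098000-m03, localhost, minikube). A sketch of issuing such a certificate with crypto/x509; a throwaway CA is generated inline for self-containment, whereas the log reuses the ca.pem/ca-key.pem on disk, and error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"demo CA"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-098000-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			// SANs as in the log: IPs and DNS names go in separate fields.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
			DNSNames:    []string{"ha-098000-m03", "localhost", "minikube"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}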
	I1204 15:33:56.184360   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:56.184421   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:56.184436   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.184584   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.184682   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.184788   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.184878   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:56.222358   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:56.222423   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:56.242527   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:56.242598   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:33:56.262411   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:56.262492   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 15:33:56.282604   20196 provision.go:87] duration metric: took 205.634097ms to configureAuth
	I1204 15:33:56.282619   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:56.282802   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:56.282816   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:56.282954   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.283056   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.283161   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.283267   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.283366   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.283498   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.283620   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.283628   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:56.345040   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:56.345053   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:56.345129   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:56.345143   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.345280   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.345367   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.345443   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.345524   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.345668   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.345805   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.345851   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:56.424345   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:56.424363   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.424517   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.424685   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.424787   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.424878   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.425031   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.425156   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.425173   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:58.122525   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:33:58.122539   20196 machine.go:96] duration metric: took 13.254423135s to provisionDockerMachine
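The diff-or-swap command above avoids a disruptive docker restart when the rendered unit is unchanged: only if the new file differs is it moved into place and the daemon reloaded, enabled, and restarted. A local sketch of the same pattern with os/exec (paths and the unit body are placeholders; the log runs these steps over SSH):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func installUnit(path string, content []byte) error {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return nil // unchanged: skip the disruptive restart
		}
		if err := os.WriteFile(path+".new", content, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // placeholder body
		fmt.Println(installUnit("/lib/systemd/system/docker.service", unit))
	}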
	I1204 15:33:58.122547   20196 start.go:293] postStartSetup for "ha-098000-m03" (driver="hyperkit")
	I1204 15:33:58.122554   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:58.122566   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.122762   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:58.122783   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.122871   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.122946   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.123045   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.123137   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.161639   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:58.164739   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:58.164749   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:58.164831   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:58.164968   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:58.164974   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:58.165140   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:58.173027   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:58.192093   20196 start.go:296] duration metric: took 69.536473ms for postStartSetup
	I1204 15:33:58.192114   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.192306   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:58.192320   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.192414   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.192509   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.192600   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.192674   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.230841   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:58.230926   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:58.265220   20196 fix.go:56] duration metric: took 13.510737637s for fixHost
	I1204 15:33:58.265271   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.265414   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.265524   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.265620   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.265713   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.265865   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:58.266013   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:58.266021   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:58.330663   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355238.486070391
	
	I1204 15:33:58.330676   20196 fix.go:216] guest clock: 1733355238.486070391
	I1204 15:33:58.330682   20196 fix.go:229] Guest: 2024-12-04 15:33:58.486070391 -0800 PST Remote: 2024-12-04 15:33:58.265237 -0800 PST m=+65.184150423 (delta=220.833391ms)
	I1204 15:33:58.330692   20196 fix.go:200] guest clock delta is within tolerance: 220.833391ms
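The clock check above runs `date +%s.%N` in the guest and compares it to the host clock, here yielding a 220.833391ms delta. A sketch of that comparison; the 2s tolerance is an assumption for illustration, since the log only reports that this delta passed:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time without
	// float rounding: seconds and the fractional part are parsed separately.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/truncate to 9 digits
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := parseGuestClock("1733355238.486070391") // guest output above
		host := time.Unix(1733355238, 265237000)            // host clock at the same moment
		delta := guest.Sub(host)
		const tolerance = 2 * time.Second // assumed bound
		fmt.Printf("delta=%v within=%v\n", delta, delta > -tolerance && delta < tolerance)
	}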
	I1204 15:33:58.330696   20196 start.go:83] releasing machines lock for "ha-098000-m03", held for 13.576240131s
	I1204 15:33:58.330714   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.330854   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:58.352510   20196 out.go:177] * Found network options:
	I1204 15:33:58.380745   20196 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1204 15:33:58.401983   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:33:58.402013   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:58.402029   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402504   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402654   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402766   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:58.402819   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	W1204 15:33:58.402881   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:33:58.402902   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:58.402977   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.403000   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:33:58.403012   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.403174   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.403214   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.403349   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.403358   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.403564   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.403575   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.403741   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	W1204 15:33:58.437750   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:58.437828   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:58.485243   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
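The find/mv step above disables competing CNI configs by renaming any bridge or podman files in /etc/cni/net.d with a .mk_disabled suffix. A Go sketch of the same sweep, with the glob patterns mirroring the find expression (run it as root, like the logged command):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled on a previous pass
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Println("skip:", err)
					continue
				}
				fmt.Println("disabled", m)
			}
		}
	}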
	I1204 15:33:58.485257   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:58.485329   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:58.514237   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:58.528266   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:58.539804   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:58.539880   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:58.555961   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:58.566195   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:58.575257   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:58.584192   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:58.593620   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:58.603021   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:58.612370   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:33:58.621502   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:58.630294   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:58.630368   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:58.640300   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 15:33:58.648626   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:58.742860   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:58.760057   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:58.760138   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:58.778296   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:58.793165   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:58.807402   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:58.818936   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:58.829930   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:58.849768   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:58.861249   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:58.876335   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:58.879342   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:58.887395   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:58.901271   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:59.012726   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:59.108627   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:59.108651   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:59.122518   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:59.224950   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:34:01.525196   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.300161441s)
	I1204 15:34:01.525275   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:34:01.537533   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:34:01.552928   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:01.564251   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:34:01.666308   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:34:01.762184   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:01.857672   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:34:01.871507   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:01.882955   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:01.972213   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:34:02.036955   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:34:02.037050   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
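The "Will wait 60s for socket path" step amounts to polling until the CRI socket file appears or the deadline passes. A sketch of that wait loop; the 200ms poll interval is an assumption, the log only fixes the 60s budget:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file exists; the runtime is reachable
			}
			time.Sleep(200 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
	}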
	I1204 15:34:02.042796   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:34:02.042875   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:34:02.046431   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:34:02.073232   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 15:34:02.073324   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:02.089702   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:02.126985   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:34:02.168586   20196 out.go:177]   - env NO_PROXY=192.169.0.5
	I1204 15:34:02.190567   20196 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I1204 15:34:02.211577   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:34:02.211977   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:34:02.216597   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:34:02.226113   20196 mustload.go:65] Loading cluster: ha-098000
	I1204 15:34:02.226314   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:02.226550   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:02.226577   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:02.238043   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58675
	I1204 15:34:02.238357   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:02.238749   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:02.238766   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:02.238998   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:02.239102   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:34:02.239217   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:02.239287   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:34:02.240505   20196 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:34:02.240770   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:02.240796   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:02.252028   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58677
	I1204 15:34:02.252346   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:02.252700   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:02.252719   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:02.252937   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:02.253032   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:34:02.253139   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.7
	I1204 15:34:02.253146   20196 certs.go:194] generating shared ca certs ...
	I1204 15:34:02.253156   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:34:02.253308   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:34:02.253362   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:34:02.253371   20196 certs.go:256] generating profile certs ...
	I1204 15:34:02.253468   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:34:02.253856   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.d946d3b4
	I1204 15:34:02.253925   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:34:02.253938   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:34:02.253962   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:34:02.253983   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:34:02.254009   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:34:02.254028   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:34:02.254046   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:34:02.254065   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:34:02.254082   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:34:02.254159   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:34:02.254203   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:34:02.254211   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:34:02.254246   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:34:02.254278   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:34:02.254310   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:34:02.254374   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:34:02.254409   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.254429   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.254447   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.254475   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:34:02.254562   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:34:02.254640   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:34:02.254716   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:34:02.254794   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:34:02.285982   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 15:34:02.289453   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 15:34:02.298834   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 15:34:02.302369   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 15:34:02.315418   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 15:34:02.318593   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 15:34:02.327312   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 15:34:02.330564   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1204 15:34:02.339456   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 15:34:02.342515   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 15:34:02.351231   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 15:34:02.354286   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1204 15:34:02.363156   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:34:02.384838   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:34:02.405926   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:34:02.426535   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:34:02.446742   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:34:02.466560   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:34:02.486853   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:34:02.507184   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:34:02.528073   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:34:02.548964   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:34:02.569347   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:34:02.589426   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 15:34:02.603866   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 15:34:02.617657   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 15:34:02.631813   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1204 15:34:02.645494   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 15:34:02.659961   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1204 15:34:02.673777   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 15:34:02.687446   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:34:02.691739   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:34:02.700420   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.703973   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.704042   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.708497   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:34:02.717646   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:34:02.726542   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.729989   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.730041   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.734277   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
	I1204 15:34:02.742686   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:34:02.751027   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.754461   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.754515   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.758843   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
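
Note: the three test/ln pairs above implement OpenSSL's hashed trust-store layout: each CA must also be reachable as /etc/ssl/certs/<subject-hash>.0, where the hash comes from "openssl x509 -hash -noout" (b5213941 for minikubeCA.pem here). A sketch of the same step in Go, shelling out to openssl exactly as the log does; paths are illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs <pem> <hash>.0, matching the shell one-liner in the log.
        _ = os.Remove(link)
        if err := os.Symlink(pem, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", pem)
    }
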
	I1204 15:34:02.767465   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:34:02.770903   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:34:02.776086   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:34:02.780679   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:34:02.785121   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:34:02.789654   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:34:02.794116   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
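
Note: "-checkend 86400" asks whether a certificate will still be valid 86400 seconds (24 h) from now; a non-zero exit would trigger regeneration before kubeadm runs. The equivalent check with Go's crypto/x509, under an illustrative path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Illustrative path; the log checks several certs the same way.
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            fmt.Fprintln(os.Stderr, "not PEM")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Same question as -checkend 86400: does the cert outlive the next 24h?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("cert expires within 24h; it would be regenerated")
            os.Exit(1)
        }
        fmt.Println("cert valid beyond the 24h window")
    }
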
	I1204 15:34:02.798756   20196 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1204 15:34:02.798834   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
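
Note: the kubelet unit above is rendered per node; only --hostname-override and --node-ip differ between control planes (m03 gets ha-098000-m03 / 192.169.0.7). A minimal text/template sketch of that rendering, under assumed field names (minikube's real template lives elsewhere in its tree):

    package main

    import (
        "os"
        "text/template"
    )

    // Assumed template shape: only the node-specific flags vary.
    const unit = "ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet" +
        " --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        // Values for m03, as logged above.
        if err := t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.31.2",
            "NodeName":          "ha-098000-m03",
            "NodeIP":            "192.169.0.7",
        }); err != nil {
            panic(err)
        }
    }
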
	I1204 15:34:02.798851   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:34:02.798902   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:34:02.811676   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:34:02.811716   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
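
Note: kube-vip runs as a static pod on every control-plane node; with cp_enable and vip_leaderelection set, the instances elect a leader that claims the HA VIP 192.169.0.254, and lb_enable (auto-set above) load-balances port 8443 across the control planes. A sketch that sanity-checks the generated manifest, assuming gopkg.in/yaml.v3 is available; the file path is illustrative:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("kube-vip.yaml") // illustrative path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var pod struct {
            Spec struct {
                Containers []struct {
                    Env []struct {
                        Name  string `yaml:"name"`
                        Value string `yaml:"value"`
                    } `yaml:"env"`
                } `yaml:"containers"`
            } `yaml:"spec"`
        }
        if err := yaml.Unmarshal(raw, &pod); err != nil || len(pod.Spec.Containers) == 0 {
            fmt.Fprintln(os.Stderr, "manifest does not parse as a pod:", err)
            os.Exit(1)
        }
        for _, e := range pod.Spec.Containers[0].Env {
            if e.Name == "address" && e.Value != "192.169.0.254" {
                fmt.Fprintln(os.Stderr, "VIP mismatch:", e.Value)
                os.Exit(1)
            }
        }
        fmt.Println("kube-vip manifest OK: address env matches the HA VIP")
    }
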
	I1204 15:34:02.811802   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:34:02.820056   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:34:02.820120   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 15:34:02.827634   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1204 15:34:02.840903   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:34:02.854283   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:34:02.867957   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:34:02.870915   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
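
Note: the one-liner above is an idempotent hosts update: strip any line already tagged control-plane.minikube.internal, append the current VIP mapping, and install the result via "sudo cp" (writing through a temp file because the shell cannot redirect into a root-owned file directly). The same logic in Go, with illustrative paths since the real step runs inside the VM:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const tag = "control-plane.minikube.internal"
        raw, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
            // Drop any stale mapping for the tag, exactly like the grep -v above.
            if !strings.HasSuffix(line, "\t"+tag) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.169.0.254\t"+tag)
        // The real step writes /tmp/h.$$ and then "sudo cp"s it over /etc/hosts.
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
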
	I1204 15:34:02.880410   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:02.978715   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:34:02.992761   20196 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:34:02.992956   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:03.013320   20196 out.go:177] * Verifying Kubernetes components...
	I1204 15:34:03.055094   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:03.162591   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:34:03.175308   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:34:03.175517   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 15:34:03.175556   20196 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1204 15:34:03.175722   20196 node_ready.go:35] waiting up to 6m0s for node "ha-098000-m03" to be "Ready" ...
	I1204 15:34:03.175774   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:03.175780   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.175788   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.175793   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.177877   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.178182   20196 node_ready.go:49] node "ha-098000-m03" has status "Ready":"True"
	I1204 15:34:03.178191   20196 node_ready.go:38] duration metric: took 2.460684ms for node "ha-098000-m03" to be "Ready" ...
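
Note: after overriding the stale VIP host with the direct endpoint 192.169.0.5:8443, the test polls the node object until its Ready condition is True (here it already was, so the wait took 2.46 ms). The raw GET above is what client-go issues under the hood; a hedged sketch of the same poll using client-go and apimachinery's wait helper (the kubeconfig comes from the environment here, and PollUntilContextTimeout assumes a recent apimachinery):

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-098000-m03", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat as transient; keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            fmt.Fprintln(os.Stderr, "node never became Ready:", err)
            os.Exit(1)
        }
        fmt.Println("node Ready")
    }
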
	I1204 15:34:03.178204   20196 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:34:03.178249   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:03.178255   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.178261   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.178265   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.181589   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:03.187858   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:03.187917   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:03.187923   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.187928   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.187931   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.190071   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.190536   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:03.190544   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.190550   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.190553   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.192357   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
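
Note: from here the log repeats this poll pair roughly twice a second until coredns-7c65d6cfc9-2z7lq reports Ready: a GET of the pod, then a GET of node ha-098000, apparently to confirm the hosting node is still Ready. The per-poll pod test reduces to inspecting status.conditions; a minimal check over an already-fetched pod using client-go's corev1 types:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podReady mirrors the per-poll test: a pod counts as "Ready" when its
    // PodReady condition reports True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }}}
        fmt.Println(podReady(p)) // false, matching the "Ready":"False" lines below
    }
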
	I1204 15:34:03.689890   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:03.689913   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.689960   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.689970   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.692722   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.693137   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:03.693145   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.693150   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.693154   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.694862   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:04.188595   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:04.188612   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.188618   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.188622   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.190926   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:04.191442   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:04.191451   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.191457   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.191460   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.193377   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:04.689410   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:04.689427   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.689433   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.689436   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.691829   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:04.692311   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:04.692320   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.692326   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.692329   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.694756   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.188051   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:05.188069   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.188075   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.188079   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.190537   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.191234   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:05.191244   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.191250   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.191254   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.193184   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:05.193754   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:05.689571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:05.689583   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.689589   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.689592   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.692119   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.693045   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:05.693054   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.693060   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.693070   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.695078   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:06.188182   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:06.188196   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.188203   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.188206   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.190803   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:06.191335   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:06.191343   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.191353   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.191358   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.193354   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:06.688125   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:06.688144   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.688150   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.688153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.698567   20196 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 15:34:06.699659   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:06.699669   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.699674   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.699678   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.702231   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:07.188129   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:07.188142   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.188149   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.188152   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.190314   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:07.190783   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:07.190793   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.190799   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.190803   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.192721   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.689429   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:07.689444   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.689450   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.689453   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.691383   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.691809   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:07.691816   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.691822   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.691827   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.693593   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.693894   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:08.189338   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:08.189353   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.189361   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.189365   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.191565   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:08.192110   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:08.192118   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.192124   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.192134   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.193879   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:08.689140   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:08.689155   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.689194   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.689198   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.691672   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:08.692190   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:08.692197   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.692203   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.692206   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.694257   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.189377   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:09.189389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.189396   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.189399   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.191765   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.192318   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:09.192326   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.192333   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.192337   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.194226   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:09.688422   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:09.688435   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.688441   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.688445   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.690918   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.691538   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:09.691546   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.691552   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.691556   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.693405   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:10.188400   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:10.188426   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.188438   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.188445   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.191226   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:10.191923   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:10.191930   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.191936   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.191940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.193682   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:10.194054   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:10.689544   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:10.689566   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.689601   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.689607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.692171   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:10.692830   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:10.692842   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.692848   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.692852   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.694354   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:11.188970   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:11.188983   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.188989   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.188992   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.193348   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:11.193835   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:11.193844   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.193850   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.193854   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.195899   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:11.688737   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:11.688752   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.688758   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.688761   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.691007   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:11.691483   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:11.691491   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.691496   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.691500   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.693198   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:12.188889   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:12.188972   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.188986   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.188999   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.192039   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:12.192581   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:12.192589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.192595   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.192598   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.194300   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:12.194673   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:12.688761   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:12.688869   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.688880   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.688888   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.691475   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:12.692022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:12.692029   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.692035   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.692039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.693737   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.190399   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:13.190424   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.190436   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.190441   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.193795   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:13.194709   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:13.194717   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.194722   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.194725   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.196228   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.688349   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:13.688361   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.688367   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.688370   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.690278   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.690775   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:13.690783   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.690788   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.690792   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.692350   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.189443   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:14.189461   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.189470   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.189474   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.191713   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:14.192328   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:14.192336   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.192341   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.192345   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.194132   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.689369   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:14.689471   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.689487   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.689522   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.693058   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:14.693755   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:14.693762   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.693768   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.693771   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.695478   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.695986   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:15.189753   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:15.189777   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.189833   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.189842   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.193300   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:15.193825   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:15.193835   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.193842   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.193848   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.195490   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:15.688564   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:15.688589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.688600   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.688607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.691559   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:15.692137   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:15.692145   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.692152   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.692156   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.693792   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:16.188974   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:16.188991   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.188999   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.189003   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.191876   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:16.192266   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:16.192273   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.192279   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.192283   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.193909   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:16.689589   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:16.689601   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.689607   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.689609   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.691735   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:16.692340   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:16.692348   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.692354   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.692364   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.694139   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:17.188693   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:17.188719   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.188730   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.188737   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.192306   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:17.192880   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:17.192888   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.192893   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.192896   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.194607   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:17.194930   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:17.689803   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:17.689822   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.689833   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.689840   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.692900   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:17.693582   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:17.693591   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.693596   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.693600   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.695568   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:18.189872   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:18.189891   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.189903   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.189909   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.193143   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:18.193659   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:18.193669   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.193677   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.193682   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.195539   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:18.689089   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:18.689110   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.689121   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.689128   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.692465   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:18.693092   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:18.693099   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.693105   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.693109   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.694811   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.188836   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:19.188866   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.188885   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.188893   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.191083   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:19.191481   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:19.191489   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.191494   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.191498   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.193210   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.688920   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:19.689019   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.689034   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.689040   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.692204   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:19.692887   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:19.692895   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.692901   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.692905   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.694482   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.694834   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:20.189463   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:20.189482   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.189495   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.189507   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.192820   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:20.193489   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:20.193497   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.193503   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.193506   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.195170   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:20.689312   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:20.689335   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.689345   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.689353   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.692898   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:20.693406   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:20.693413   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.693419   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.693435   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.695237   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:21.189479   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:21.189499   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.189511   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.189519   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.192490   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:21.193119   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:21.193127   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.193132   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.193136   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.194670   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:21.689574   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:21.689589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.689595   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.689598   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.691684   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:21.692133   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:21.692140   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.692145   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.692156   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.694020   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:22.189311   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:22.189327   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.189334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.189337   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.191942   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:22.192424   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:22.192432   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.192438   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.192441   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.194080   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:22.194500   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:22.689269   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:22.689284   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.689293   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.689297   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.691724   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:22.692389   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:22.692397   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.692404   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.692407   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.694417   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:23.188903   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:23.188937   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.188944   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.188948   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.191281   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:23.191769   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:23.191776   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.191783   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.191786   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.193597   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:23.689658   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:23.689673   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.689682   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.689688   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.692154   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:23.692597   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:23.692605   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.692611   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.692614   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.694442   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:24.190414   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:24.190439   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.190448   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.190453   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.193694   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:24.194336   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:24.194343   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.194349   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.194352   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.196204   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:24.196507   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:24.689283   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:24.689324   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.689334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.689339   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.691786   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:24.692252   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:24.692260   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.692265   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.692269   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.694045   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:25.189972   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:25.189988   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.189995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.189997   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.192150   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:25.192590   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:25.192598   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.192604   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.192607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.194554   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:25.689840   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:25.689893   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.689902   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.689908   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.692432   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:25.693530   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:25.693539   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.693545   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.693556   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.695085   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.188685   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:26.188774   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.188787   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.188792   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.191478   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:26.191981   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:26.191990   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.191995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.191998   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.193972   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.689955   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:26.690060   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.690076   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.690084   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.693025   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:26.693583   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:26.693591   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.693596   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.693601   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.695193   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.695569   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:27.190057   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:27.190079   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.190096   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.190102   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.193105   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:27.193849   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:27.193860   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.193868   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.193873   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.195538   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:27.688758   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:27.688772   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.688779   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.688783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.694666   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:27.695270   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:27.695278   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.695283   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.695288   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.696913   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:28.188770   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:28.188819   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.188832   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.188840   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.191808   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:28.192403   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:28.192411   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.192416   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.192420   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.194136   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:28.689405   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:28.689487   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.689503   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.689511   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.694694   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:28.695230   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:28.695237   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.695243   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.695246   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.697820   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:28.698133   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:29.190106   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:29.190125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.190138   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.190143   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.193071   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:29.193687   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:29.193698   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.193706   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.193711   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.195444   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:29.689830   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:29.689849   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.689862   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.689867   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.692977   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:29.693745   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:29.693753   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.693759   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.693762   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.695525   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:30.190945   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:30.190965   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.190976   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.190988   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.195195   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:30.195850   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:30.195859   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.195865   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.195869   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.197592   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:30.689476   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:30.689500   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.689510   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.689516   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.692808   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:30.693458   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:30.693466   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.693471   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.693474   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.695140   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:31.189274   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:31.189389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.189404   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.189413   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.192545   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:31.193168   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:31.193179   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.193186   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.193193   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.194805   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:31.195157   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:31.690066   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:31.690125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.690139   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.690147   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.693489   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:31.694073   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:31.694084   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.694093   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.694098   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.695789   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:32.190294   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:32.190315   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.190326   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.190333   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.193258   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:32.193839   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:32.193846   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.193852   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.193856   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.195470   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:32.689113   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:32.689137   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.689148   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.689153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.692269   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:32.692828   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:32.692836   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.692842   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.692845   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.694429   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.188950   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:33.188969   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.188980   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.188987   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.191891   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:33.192381   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:33.192389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.192395   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.192400   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.194337   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.690112   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:33.690134   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.690145   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.690153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.693581   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:33.694215   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:33.694223   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.694229   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.694232   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.696177   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.696454   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:34.189881   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:34.189900   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.189912   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.189918   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.193287   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:34.193886   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:34.193897   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.193909   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.193915   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.195881   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:34.689892   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:34.689916   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.689931   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.689940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.693606   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:34.694219   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:34.694227   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.694234   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.694237   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.696105   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.188973   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:35.189024   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.189039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.189046   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.192172   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:35.192755   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:35.192763   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.192769   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.192772   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.194518   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.690180   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:35.690201   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.690214   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.690223   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.694006   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:35.694605   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:35.694612   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.694619   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.694622   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.696235   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.696565   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:36.189741   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:36.189767   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.189779   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.189785   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.193344   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:36.194036   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:36.194047   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.194055   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.194059   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.195836   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:36.690199   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:36.690224   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.690236   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.690241   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.693462   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:36.694091   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:36.694102   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.694110   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.694116   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.695766   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:37.190287   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:37.190309   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.190320   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.190326   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.196511   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:34:37.197043   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:37.197052   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.197058   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.197061   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.199818   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:37.690095   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:37.690118   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.690129   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.690136   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.693801   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:37.694618   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:37.694626   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.694632   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.694636   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.696670   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:37.697007   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:38.190293   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:38.190317   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.190329   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.190338   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.194628   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:38.195183   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:38.195190   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.195196   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.195201   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.197386   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:38.689866   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:38.689889   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.689900   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.689905   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.693601   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:38.694401   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:38.694412   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.694420   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.694426   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.696297   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:39.190990   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:39.191012   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.191024   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.191031   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.198155   20196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 15:34:39.199473   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:39.199482   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.199488   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.199493   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.205055   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:39.690106   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:39.690130   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.690142   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.690147   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.693615   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:39.694445   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:39.694452   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.694458   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.694462   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.696222   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.189693   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:40.189718   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.189731   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.189746   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.193370   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:40.194004   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.194012   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.194018   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.194021   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.195604   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.195934   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:40.195944   20196 pod_ready.go:82] duration metric: took 37.007028934s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
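
The 37-second stretch above is minikube's pod_ready wait loop: every ~500ms it re-fetches the pod and the node hosting it, and keeps polling until the pod's Ready condition turns True (pod_ready.go:103 logs the intermediate "False" states; pod_ready.go:93 and :82 log success and total duration). As a rough, hedged sketch of that pattern with client-go (waitPodReady and podIsReady are illustrative names, not minikube's actual helpers, and the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady mirrors the check behind the pod_ready.go lines: a pod
// counts as "Ready" when its PodReady condition is ConditionTrue.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod, and (like the log above) the hosting node,
// every 500ms until the pod is Ready or the timeout expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// The loop also re-fetches the node each cycle; that is why every
		// GET .../pods/... above is paired with a GET .../nodes/... and is
		// what lets the wait bail out early when the node goes not-Ready.
		if _, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{}); err != nil {
			return err
		}
		if podIsReady(pod) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 6m0s matches the per-pod budget printed by pod_ready.go:79.
	err = waitPodReady(context.Background(), cs, "kube-system",
		"coredns-7c65d6cfc9-2z7lq", "ha-098000", 6*time.Minute)
	fmt.Println(err)
}
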
	I1204 15:34:40.195952   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:40.195984   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:40.195989   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.195995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.195999   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.197711   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.198120   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.198128   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.198134   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.198136   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.199690   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.696200   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:40.696219   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.696228   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.696232   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.698719   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:40.699262   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.699270   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.699277   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.699281   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.701563   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:41.196423   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:41.196440   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.196446   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.196449   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.199972   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:41.200435   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:41.200444   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.200449   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.200454   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.202156   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:41.696302   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:41.696325   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.696334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.696376   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.698859   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:41.699465   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:41.699474   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.699480   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.699486   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.701569   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.197903   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:42.197925   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.197937   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.197942   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.200867   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.201412   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.201420   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.201427   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.201431   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.203130   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.203467   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:42.697162   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:42.697182   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.697194   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.697200   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.700051   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.700562   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.700570   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.700576   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.700579   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.702701   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.703063   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.703073   20196 pod_ready.go:82] duration metric: took 2.507044671s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.703080   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.703116   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000
	I1204 15:34:42.703121   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.703129   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.703134   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.705021   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.705585   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.705592   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.705598   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.705609   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.707581   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.708069   20196 pod_ready.go:93] pod "etcd-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.708079   20196 pod_ready.go:82] duration metric: took 4.993321ms for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.708086   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.708121   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m02
	I1204 15:34:42.708126   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.708131   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.708135   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.710061   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.710514   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:42.710522   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.710528   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.710532   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.712173   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.712569   20196 pod_ready.go:93] pod "etcd-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.712578   20196 pod_ready.go:82] duration metric: took 4.485807ms for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.712584   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.712616   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m03
	I1204 15:34:42.712621   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.712627   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.712630   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.714463   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.714960   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:42.714968   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.714976   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.714980   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.716756   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.717063   20196 pod_ready.go:93] pod "etcd-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.717072   20196 pod_ready.go:82] duration metric: took 4.482301ms for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.717082   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.717112   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:34:42.717116   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.717122   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.717126   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.718813   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.719178   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.719186   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.719192   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.719196   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.720739   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.721127   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.721135   20196 pod_ready.go:82] duration metric: took 4.047168ms for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.721141   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.898812   20196 request.go:632] Waited for 177.546709ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:34:42.898865   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:34:42.898875   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.898884   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.898890   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.901957   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.097426   20196 request.go:632] Waited for 194.940606ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:43.097482   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:43.097488   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.097494   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.097498   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.099791   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.100329   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.100338   20196 pod_ready.go:82] duration metric: took 379.181132ms for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
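
The request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go itself, not by the API server: once the burst of back-to-back pod/node GETs exhausts the client's token bucket, each further request is delayed before it is even sent. Assuming minikube inherits client-go's default limiter (QPS 5, burst 10), the steady-state spacing is 200ms, which matches the ~177-198ms waits logged here. A minimal sketch of that limiter using client-go's flowcontrol package:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// client-go's default client-side limiter: 5 requests/second, burst 10.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks once the burst tokens are spent
		if wait := time.Since(start); wait > time.Millisecond {
			// After the 10 burst tokens, each call waits ~200ms (1/5 s),
			// consistent with the waits logged by request.go:632 above.
			fmt.Printf("request %d waited %v\n", i, wait)
		}
	}
}
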
	I1204 15:34:43.100345   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.297467   20196 request.go:632] Waited for 197.060564ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:34:43.297531   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:34:43.297536   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.297542   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.297546   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.299888   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.498066   20196 request.go:632] Waited for 197.627847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:43.498144   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:43.498154   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.498165   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.498171   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.501495   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.501933   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.501946   20196 pod_ready.go:82] duration metric: took 401.584296ms for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.501955   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.697541   20196 request.go:632] Waited for 195.539974ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:34:43.697609   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:34:43.697614   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.697620   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.697624   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.699660   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.897896   20196 request.go:632] Waited for 197.715706ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:43.897988   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:43.897999   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.898011   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.898040   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.901116   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.901493   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.901504   20196 pod_ready.go:82] duration metric: took 399.531331ms for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.901511   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.097961   20196 request.go:632] Waited for 196.319346ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:34:44.098022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:34:44.098031   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.098043   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.098052   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.101549   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.297607   20196 request.go:632] Waited for 195.557496ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:44.297743   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:44.297756   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.297766   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.297776   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.301215   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.301821   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:44.301835   20196 pod_ready.go:82] duration metric: took 400.304316ms for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.301844   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.497418   20196 request.go:632] Waited for 195.52419ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:34:44.497540   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:34:44.497551   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.497561   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.497567   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.500605   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.697803   20196 request.go:632] Waited for 196.768583ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:44.697874   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:44.697880   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.697886   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.697892   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.699791   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:44.700181   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:44.700191   20196 pod_ready.go:82] duration metric: took 398.331303ms for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.700206   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.897582   20196 request.go:632] Waited for 197.274481ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:34:44.897621   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:34:44.897628   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.897636   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.897643   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.899968   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.098303   20196 request.go:632] Waited for 197.936546ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:45.098458   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:45.098471   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.098481   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.098489   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.101906   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.102405   20196 pod_ready.go:93] pod "kube-proxy-8dv6r" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:45.102418   20196 pod_ready.go:82] duration metric: took 402.19463ms for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.102429   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.297787   20196 request.go:632] Waited for 195.298622ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:34:45.297896   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:34:45.297908   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.297918   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.297924   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.301224   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.497743   20196 request.go:632] Waited for 195.731374ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:45.497789   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:45.497798   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.497808   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.497816   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.501296   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.501752   20196 pod_ready.go:93] pod "kube-proxy-9strn" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:45.501764   20196 pod_ready.go:82] duration metric: took 399.314475ms for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.501772   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.698321   20196 request.go:632] Waited for 196.486057ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:34:45.698364   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:34:45.698368   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.698395   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.698400   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.700678   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.898338   20196 request.go:632] Waited for 197.154497ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:34:45.898437   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:34:45.898445   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.898454   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.898460   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.900811   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.901113   20196 pod_ready.go:98] node "ha-098000-m04" hosting pod "kube-proxy-mz4q2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-098000-m04" has status "Ready":"Unknown"
	I1204 15:34:45.901124   20196 pod_ready.go:82] duration metric: took 399.323564ms for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	E1204 15:34:45.901130   20196 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-098000-m04" hosting pod "kube-proxy-mz4q2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-098000-m04" has status "Ready":"Unknown"
	I1204 15:34:45.901136   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.098348   20196 request.go:632] Waited for 197.16954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:34:46.098411   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:34:46.098417   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.098423   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.098428   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.100807   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.298026   20196 request.go:632] Waited for 196.74762ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:46.298086   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:46.298092   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.298098   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.298103   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.300358   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.300719   20196 pod_ready.go:93] pod "kube-proxy-rf4cp" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:46.300729   20196 pod_ready.go:82] duration metric: took 399.576022ms for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.300737   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.497896   20196 request.go:632] Waited for 197.086517ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:34:46.497983   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:34:46.498051   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.498063   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.498071   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.501601   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:46.698084   20196 request.go:632] Waited for 195.78543ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:46.698119   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:46.698125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.698170   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.698177   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.700251   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.700719   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:46.700729   20196 pod_ready.go:82] duration metric: took 399.975386ms for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.700736   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.898629   20196 request.go:632] Waited for 197.83339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:34:46.898748   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:34:46.898762   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.898773   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.898783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.902413   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.099363   20196 request.go:632] Waited for 196.494986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:47.099466   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:47.099477   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.099488   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.099495   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.102564   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.102986   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:47.102995   20196 pod_ready.go:82] duration metric: took 402.242621ms for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.103002   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.297846   20196 request.go:632] Waited for 194.795128ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:34:47.297889   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:34:47.297939   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.297949   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.297953   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.300484   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:47.498216   20196 request.go:632] Waited for 197.302267ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:47.498266   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:47.498358   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.498374   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.498381   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.501722   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.502017   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:47.502028   20196 pod_ready.go:82] duration metric: took 399.008512ms for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.502037   20196 pod_ready.go:39] duration metric: took 44.322579822s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
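	[editor's note] The run above polls each control-plane pod until its Ready condition reports True, pausing when client-side throttling kicks in. Below is a minimal client-go sketch of that per-pod wait; the pod name and the 6-minute timeout come from the log, while the kubeconfig path and polling interval are illustrative assumptions, not minikube's actual pod_ready implementation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a standard kubeconfig at ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // mirrors the "waiting up to 6m0s" in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-8dv6r", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // illustrative interval; the client also self-throttles
	}
	fmt.Println("timed out waiting for pod")
}
```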
	I1204 15:34:47.502061   20196 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:34:47.502149   20196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:34:47.513881   20196 api_server.go:72] duration metric: took 44.519844285s to wait for apiserver process to appear ...
	I1204 15:34:47.513892   20196 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:34:47.513909   20196 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1204 15:34:47.516967   20196 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1204 15:34:47.517003   20196 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1204 15:34:47.517008   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.517014   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.517018   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.517533   20196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 15:34:47.517562   20196 api_server.go:141] control plane version: v1.31.2
	I1204 15:34:47.517569   20196 api_server.go:131] duration metric: took 3.673154ms to wait for apiserver health ...
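	[editor's note] The healthz gate above is a plain HTTPS GET that expects status 200 and a body of "ok". A minimal sketch of such a probe against the address seen in the log; TLS verification is skipped here only because this sketch loads no cluster CA, which a real client should do instead.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: the apiserver cert is signed by the cluster CA,
			// so load that CA rather than skipping verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```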
	I1204 15:34:47.517575   20196 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 15:34:47.697569   20196 request.go:632] Waited for 179.954091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:47.697605   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:47.697611   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.697617   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.697621   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.702548   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:47.707779   20196 system_pods.go:59] 26 kube-system pods found
	I1204 15:34:47.707791   20196 system_pods.go:61] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:34:47.707795   20196 system_pods.go:61] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:34:47.707798   20196 system_pods.go:61] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:34:47.707801   20196 system_pods.go:61] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:34:47.707809   20196 system_pods.go:61] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:34:47.707813   20196 system_pods.go:61] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:34:47.707815   20196 system_pods.go:61] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:34:47.707818   20196 system_pods.go:61] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:34:47.707821   20196 system_pods.go:61] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:34:47.707823   20196 system_pods.go:61] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:34:47.707826   20196 system_pods.go:61] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:34:47.707830   20196 system_pods.go:61] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:34:47.707837   20196 system_pods.go:61] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:34:47.707841   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:34:47.707844   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:34:47.707846   20196 system_pods.go:61] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:34:47.707849   20196 system_pods.go:61] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:34:47.707851   20196 system_pods.go:61] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:34:47.707854   20196 system_pods.go:61] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:34:47.707857   20196 system_pods.go:61] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:34:47.707860   20196 system_pods.go:61] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:34:47.707862   20196 system_pods.go:61] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:34:47.707865   20196 system_pods.go:61] "kube-vip-ha-098000" [e04c72cd-f983-42ad-b97f-eeff7a988de3] Running
	I1204 15:34:47.707867   20196 system_pods.go:61] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:34:47.707870   20196 system_pods.go:61] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:34:47.707874   20196 system_pods.go:61] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 15:34:47.707879   20196 system_pods.go:74] duration metric: took 190.294933ms to wait for pod list to return data ...
	I1204 15:34:47.707885   20196 default_sa.go:34] waiting for default service account to be created ...
	I1204 15:34:47.897357   20196 request.go:632] Waited for 189.411036ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:34:47.897446   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:34:47.897455   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.897463   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.897470   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.899736   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:47.899815   20196 default_sa.go:45] found service account: "default"
	I1204 15:34:47.899824   20196 default_sa.go:55] duration metric: took 191.920936ms for default service account to be created ...
	I1204 15:34:47.899831   20196 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 15:34:48.097563   20196 request.go:632] Waited for 197.602094ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:48.097612   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:48.097620   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:48.097663   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:48.097675   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:48.102765   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:48.109211   20196 system_pods.go:86] 26 kube-system pods found
	I1204 15:34:48.109362   20196 system_pods.go:89] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:34:48.109371   20196 system_pods.go:89] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:34:48.109375   20196 system_pods.go:89] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:34:48.109379   20196 system_pods.go:89] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:34:48.109383   20196 system_pods.go:89] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:34:48.109386   20196 system_pods.go:89] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:34:48.109389   20196 system_pods.go:89] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:34:48.109393   20196 system_pods.go:89] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:34:48.109396   20196 system_pods.go:89] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:34:48.109400   20196 system_pods.go:89] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:34:48.109403   20196 system_pods.go:89] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:34:48.109406   20196 system_pods.go:89] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:34:48.109409   20196 system_pods.go:89] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:34:48.109413   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:34:48.109417   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:34:48.109419   20196 system_pods.go:89] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:34:48.109422   20196 system_pods.go:89] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:34:48.109425   20196 system_pods.go:89] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:34:48.109428   20196 system_pods.go:89] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:34:48.109431   20196 system_pods.go:89] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:34:48.109434   20196 system_pods.go:89] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:34:48.109437   20196 system_pods.go:89] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:34:48.109439   20196 system_pods.go:89] "kube-vip-ha-098000" [e04c72cd-f983-42ad-b97f-eeff7a988de3] Running
	I1204 15:34:48.109442   20196 system_pods.go:89] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:34:48.109445   20196 system_pods.go:89] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:34:48.109450   20196 system_pods.go:89] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 15:34:48.109455   20196 system_pods.go:126] duration metric: took 209.614349ms to wait for k8s-apps to be running ...
	I1204 15:34:48.109461   20196 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 15:34:48.109531   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:34:48.120276   20196 system_svc.go:56] duration metric: took 10.810365ms WaitForService to wait for kubelet
	I1204 15:34:48.120291   20196 kubeadm.go:582] duration metric: took 45.126238068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
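	[editor's note] The kubelet check above relies on `systemctl is-active --quiet` exiting 0 only when the unit is active, so a nil error from Run() means the service is up. A hypothetical helper sketching that pattern; it drops the `sudo` and the stray `service` token seen in the logged command.

```go
package main

import (
	"fmt"
	"os/exec"
)

// serviceActive returns true when systemd reports the unit as active.
// --quiet suppresses output; the exit status carries the answer.
func serviceActive(name string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", name).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
```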
	I1204 15:34:48.120303   20196 node_conditions.go:102] verifying NodePressure condition ...
	I1204 15:34:48.297415   20196 request.go:632] Waited for 177.05913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1204 15:34:48.297455   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1204 15:34:48.297461   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:48.297469   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:48.297475   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:48.300123   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:48.300830   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300840   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300847   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300850   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300853   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300856   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300860   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300862   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300866   20196 node_conditions.go:105] duration metric: took 180.554037ms to run NodePressure ...
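	[editor's note] The NodePressure step above lists all nodes and reads each one's capacity, which is why the log shows four ephemeral-storage/cpu pairs for the four cluster nodes. A sketch of that listing with client-go, under the same kubeconfig assumption as the earlier sketch.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Copy the quantities into locals so the pointer-receiver String() applies.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```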
	I1204 15:34:48.300874   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:34:48.300889   20196 start.go:255] writing updated cluster config ...
	I1204 15:34:48.322431   20196 out.go:201] 
	I1204 15:34:48.344449   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:48.344580   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.367119   20196 out.go:177] * Starting "ha-098000-m04" worker node in "ha-098000" cluster
	I1204 15:34:48.409090   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:34:48.409115   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:34:48.409244   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:34:48.409257   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:34:48.409347   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.410058   20196 start.go:360] acquireMachinesLock for ha-098000-m04: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:34:48.410126   20196 start.go:364] duration metric: took 51.472µs to acquireMachinesLock for "ha-098000-m04"
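	[editor's note] The machines lock above is acquired with a 500ms retry delay and a 13-minute timeout. A sketch of that acquire-with-deadline pattern using an exclusive lock file; the real driver uses its own mutex implementation, so the file-based lock here is only a stand-in.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire retries an exclusive-create of the lock file until it succeeds
// or the overall timeout elapses, sleeping `delay` between attempts.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Delay and timeout mirror the log's {Delay:500ms Timeout:13m0s}; the path is hypothetical.
	release, err := acquire("/tmp/ha-098000-m04.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held")
}
```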
	I1204 15:34:48.410144   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:34:48.410150   20196 fix.go:54] fixHost starting: m04
	I1204 15:34:48.410455   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:48.410480   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:48.421860   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58681
	I1204 15:34:48.422147   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:48.422522   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:48.422541   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:48.422736   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:48.422817   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:34:48.422956   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetState
	I1204 15:34:48.423067   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.423135   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 19762
	I1204 15:34:48.424293   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid 19762 missing from process table
	I1204 15:34:48.424344   20196 fix.go:112] recreateIfNeeded on ha-098000-m04: state=Stopped err=<nil>
	I1204 15:34:48.424356   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	W1204 15:34:48.424441   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:34:48.445040   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m04" ...
	I1204 15:34:48.535157   20196 main.go:141] libmachine: (ha-098000-m04) Calling .Start
	I1204 15:34:48.535373   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.535405   20196 main.go:141] libmachine: (ha-098000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid
	I1204 15:34:48.535476   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Using UUID 8502617a-13a7-430f-a6ae-7be776245ae1
	I1204 15:34:48.565169   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Generated MAC 7a:59:49:d0:f8:66
	I1204 15:34:48.565217   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:34:48.565376   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8502617a-13a7-430f-a6ae-7be776245ae1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:34:48.565411   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8502617a-13a7-430f-a6ae-7be776245ae1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:34:48.565471   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8502617a-13a7-430f-a6ae-7be776245ae1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/ha-098000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:34:48.565528   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8502617a-13a7-430f-a6ae-7be776245ae1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/ha-098000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:34:48.565552   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:34:48.566902   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Pid is 20252
	I1204 15:34:48.567481   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Attempt 0
	I1204 15:34:48.567496   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.567619   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 20252
	I1204 15:34:48.570453   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Searching for 7a:59:49:d0:f8:66 in /var/db/dhcpd_leases ...
	I1204 15:34:48.570536   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:34:48.570551   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f4f2}
	I1204 15:34:48.570574   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f4d1}
	I1204 15:34:48.570588   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:34:48.570605   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:34:48.570615   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Found match: 7a:59:49:d0:f8:66
	I1204 15:34:48.570625   20196 main.go:141] libmachine: (ha-098000-m04) DBG | IP: 192.169.0.8
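	[editor's note] The MAC-to-IP lookup above scans macOS's /var/db/dhcpd_leases for the VM's generated MAC. A sketch of such a parser, assuming each lease entry lists ip_address before hw_address as bootpd writes them; this is illustrative, not the hyperkit driver's implementation, and it does not handle the zero-stripped octets visible in some logged entries.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC returns the IP bound to the given MAC in a dhcpd_leases file.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=") // remember the entry's IP
		}
		// hw_address lines look like "hw_address=1,7a:59:49:d0:f8:66".
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
			return ip, nil
		}
	}
	return "", fmt.Errorf("MAC %s not found", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "7a:59:49:d0:f8:66")
	fmt.Println(ip, err)
}
```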
	I1204 15:34:48.570635   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetConfigRaw
	I1204 15:34:48.571737   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:34:48.571957   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.572535   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:34:48.572555   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:34:48.572720   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:48.572824   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:48.572944   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:48.573100   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:48.573236   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:48.573428   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:48.573574   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:48.573582   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:34:48.578618   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:34:48.587514   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:34:48.588773   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:34:48.588818   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:34:48.588867   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:34:48.588887   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:34:49.021227   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:34:49.021251   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:34:49.136078   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:34:49.136099   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:34:49.136106   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:34:49.136115   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:34:49.136921   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:34:49.136930   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:34:54.890690   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:34:54.890729   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:34:54.890737   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:34:54.916069   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:34:59.632189   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:34:59.632205   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.632363   20196 buildroot.go:166] provisioning hostname "ha-098000-m04"
	I1204 15:34:59.632375   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.632472   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.632554   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:59.632630   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.632721   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.632816   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:59.633517   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:59.633682   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:59.633692   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m04 && echo "ha-098000-m04" | sudo tee /etc/hostname
	I1204 15:34:59.697622   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m04
	
	I1204 15:34:59.697639   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.697775   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:59.697886   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.697981   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.698057   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:59.698172   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:59.698298   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:59.698309   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:34:59.757369   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:34:59.757388   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:34:59.757401   20196 buildroot.go:174] setting up certificates
	I1204 15:34:59.757413   20196 provision.go:84] configureAuth start
	I1204 15:34:59.757421   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.757593   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:34:59.757706   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.757790   20196 provision.go:143] copyHostCerts
	I1204 15:34:59.757821   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:34:59.757873   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:34:59.757878   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:34:59.758004   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:34:59.758235   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:34:59.758271   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:34:59.758277   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:34:59.758377   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:34:59.758555   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:34:59.758595   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:34:59.758601   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:34:59.758673   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:34:59.758840   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m04 san=[127.0.0.1 192.169.0.8 ha-098000-m04 localhost minikube]
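	[editor's note] The server-certificate step above issues a cert whose SANs cover the IPs and hostnames listed in the log (127.0.0.1, 192.169.0.8, ha-098000-m04, localhost, minikube). A compact crypto/x509 sketch of a SAN-bearing template; it self-signs for brevity, whereas the real step signs with the ca.pem/ca-key.pem pair.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-098000-m04"}}, // org from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log's san=[...] list.
		DNSNames:    []string{"ha-098000-m04", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
	}
	// Self-signed: template doubles as parent. Real code passes the CA cert and key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
```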
	I1204 15:35:00.089781   20196 provision.go:177] copyRemoteCerts
	I1204 15:35:00.090065   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:35:00.090090   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.090250   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.090364   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.090440   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.090527   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:00.124202   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:35:00.124273   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:35:00.161213   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:35:00.161289   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:35:00.180684   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:35:00.180757   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 15:35:00.200255   20196 provision.go:87] duration metric: took 442.820652ms to configureAuth
	I1204 15:35:00.200272   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:35:00.201095   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:35:00.201110   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:00.201255   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.201346   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.201433   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.201525   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.201613   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.201739   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.201862   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.201869   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:35:00.254941   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:35:00.254954   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:35:00.255043   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:35:00.255055   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.255192   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.255284   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.255363   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.255444   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.255591   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.255723   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.255769   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:35:00.320168   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:35:00.320186   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.320331   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.320425   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.320520   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.320607   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.320759   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.320905   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.320920   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:35:01.894648   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:35:01.894665   20196 machine.go:96] duration metric: took 13.321743335s to provisionDockerMachine
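	[editor's note] The unit install just above is idempotent: the remote one-liner diffs the rendered docker.service.new against the installed unit and only moves it into place and daemon-reloads/enables/restarts when they differ (here the unit did not yet exist, hence the "can't stat" diff output). A local Go sketch of the same write-if-changed pattern; paths and the direct exec calls are illustrative, since the real step runs over SSH with sudo.

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs `rendered` at `path` only when it differs from what is
// already there, then reloads systemd and restarts the service.
func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: leave the running service untouched
	}
	// Missing file (as in the log) or changed content: stage and swap in.
	if err := os.WriteFile(path+".new", rendered, 0644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v (%s)", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n")))
}
```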
	I1204 15:35:01.894674   20196 start.go:293] postStartSetup for "ha-098000-m04" (driver="hyperkit")
	I1204 15:35:01.894686   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:35:01.894699   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:01.894901   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:35:01.894920   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:01.895018   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:01.895119   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:01.895219   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:01.895309   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:01.930531   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:35:01.933734   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:35:01.933745   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:35:01.933830   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:35:01.934221   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:35:01.934229   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:35:01.934400   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:35:01.942635   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:35:01.962080   20196 start.go:296] duration metric: took 67.394691ms for postStartSetup
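
postStartSetup mirrors everything under the host's .minikube/files tree into the guest at the same relative path, which is how 178212.pem lands in /etc/ssl/certs. A rough manual equivalent over the same SSH identity (a sketch assuming key auth as user docker, per the sshutil lines above; the extra sudo mv hop is needed because /etc is not writable by the SSH user):

    KEY=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa
    SRC=/Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem
    ssh -i "$KEY" docker@192.169.0.8 'sudo mkdir -p /etc/ssl/certs'
    scp -i "$KEY" "$SRC" docker@192.169.0.8:/tmp/178212.pem
    ssh -i "$KEY" docker@192.169.0.8 'sudo mv /tmp/178212.pem /etc/ssl/certs/178212.pem'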
	I1204 15:35:01.962104   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:01.962295   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:35:01.962307   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:01.962392   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:01.962474   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:01.962566   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:01.962648   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:01.996347   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:35:01.996427   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:35:02.030032   20196 fix.go:56] duration metric: took 13.619496662s for fixHost
	I1204 15:35:02.030058   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.030197   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.030296   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.030393   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.030479   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.030637   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:02.030806   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:02.030817   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:35:02.085147   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355302.120673328
	
	I1204 15:35:02.085159   20196 fix.go:216] guest clock: 1733355302.120673328
	I1204 15:35:02.085164   20196 fix.go:229] Guest: 2024-12-04 15:35:02.120673328 -0800 PST Remote: 2024-12-04 15:35:02.030047 -0800 PST m=+128.947170547 (delta=90.626328ms)
	I1204 15:35:02.085182   20196 fix.go:200] guest clock delta is within tolerance: 90.626328ms
	I1204 15:35:02.085188   20196 start.go:83] releasing machines lock for "ha-098000-m04", held for 13.674670433s
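
The guest-clock check runs "date +%s.%N" on the node and compares the result with the host wall clock: guest 1733355302.120673328 minus host 1733355302.030047 is 0.090626328s, exactly the 90.626328ms delta reported, which sits inside the tolerance, so no clock adjustment is made. The arithmetic can be verified with a one-liner:

    echo "1733355302.120673328 - 1733355302.030047" | bc   # -> .090626328  (= 90.626328 ms)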
	I1204 15:35:02.085206   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.085349   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:35:02.123833   20196 out.go:177] * Found network options:
	I1204 15:35:02.144638   20196 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W1204 15:35:02.165506   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.165534   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.165554   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:35:02.165573   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.166172   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.166326   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	W1204 15:35:02.166492   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.166507   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.166517   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:35:02.166609   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:35:02.166623   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.166758   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.166911   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.167036   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.167085   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:35:02.167112   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.167158   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:02.167255   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.167389   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.167508   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.167638   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	W1204 15:35:02.202034   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:35:02.202111   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:35:02.250167   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:35:02.250181   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:35:02.250263   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:35:02.264522   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:35:02.273699   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:35:02.283110   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:35:02.283199   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:35:02.292318   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:35:02.301397   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:35:02.310459   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:35:02.319592   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:35:02.328805   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:35:02.338084   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:35:02.347336   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
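
Taken together, the sed edits above steer /etc/containerd/config.toml to the "cgroupfs" driver: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are rewritten to io.containerd.runc.v2, sandbox_image is pinned to registry.k8s.io/pause:3.10, restrict_oom_score_adj is cleared, conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports = true is injected under the CRI plugin table. A sketch of the resulting fragment (key placement follows containerd's stock layout; surrounding keys omitted):

    # Sketch only: reconstructed from the sed commands above, not dumped from the node.
    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false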
	I1204 15:35:02.356538   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:35:02.364640   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:35:02.364708   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:35:02.374467   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
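
The status-255 sysctl failure is expected on a fresh boot: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is why the failed probe is immediately followed by modprobe and the ip_forward enable. The same sequence can be confirmed by hand on the node:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables             # now resolves instead of "cannot stat"
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"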
	I1204 15:35:02.382987   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:35:02.482753   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:35:02.500374   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:35:02.500464   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:35:02.521212   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:35:02.537841   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:35:02.556887   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:35:02.568330   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:35:02.579634   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:35:02.599962   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:35:02.611341   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:35:02.627983   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:35:02.630940   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:35:02.638934   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
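
With Docker chosen as the runtime, /etc/crictl.yaml is rewritten (replacing the containerd endpoint set a moment earlier) to point CRI tooling at cri-dockerd, and a 190-byte systemd drop-in is pushed to /etc/systemd/system/cri-docker.service.d/10-cni.conf; the drop-in's body is not echoed in the log, but its role is to pass CNI settings to cri-dockerd. The crictl side is exactly what the printf above writes:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/cri-dockerd.sock

After this, "sudo crictl ps -a" on the node talks to Docker through cri-dockerd.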
	I1204 15:35:02.652587   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:35:02.752578   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:35:02.855546   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:35:02.855575   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
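
The daemon.json pushed here is likewise not echoed; only its 130-byte size is logged. Given the "configuring docker to use cgroupfs" message, a plausible, illustrative shape is the cgroup-driver override below; the verbatim file may carry additional keys:

    # Illustrative only: the actual 130-byte daemon.json is not shown in the log.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF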
	I1204 15:35:02.869623   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:35:02.966924   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:36:03.915497   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.946841873s)
	I1204 15:36:03.916405   20196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1204 15:36:03.950956   20196 out.go:201] 
	W1204 15:36:03.971878   20196 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 04 23:35:00 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640232708Z" level=info msg="Starting up"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640913001Z" level=info msg="containerd not running, starting managed containerd"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.641520029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=498
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.659694182Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677007859Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677106781Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677181167Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677217787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677508761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677564998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677718553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677761182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677794548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677829672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677979478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.678361377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.679991465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680045979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680192561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680239332Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680562445Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680612744Z" level=info msg="metadata content store policy set" policy=shared
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684019168Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684126285Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684179264Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684280902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684315598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684384845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684662040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684780718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684823731Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684856490Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684888664Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684919549Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684954923Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684987161Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685018887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685064260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685101516Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685133834Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685178048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685213190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685243893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685277956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685310825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685342262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685371807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685438293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685477655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685510785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685541139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685570835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685612124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685654983Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685694239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685725951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685757256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685828769Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685873022Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686013280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686053930Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686084541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686114731Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686150092Z" level=info msg="NRI interface is disabled by configuration."
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686396292Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686486749Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686550930Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686589142Z" level=info msg="containerd successfully booted in 0.028291s"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.663269012Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.685002759Z" level=info msg="Loading containers: start."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.779781751Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.847897599Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.892577077Z" level=info msg="Loading containers: done."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902420090Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902480737Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902498001Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902856617Z" level=info msg="Daemon has completed initialization"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925683807Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925904543Z" level=info msg="API listen on [::]:2376"
	Dec 04 23:35:01 ha-098000-m04 systemd[1]: Started Docker Application Container Engine.
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.029030705Z" level=info msg="Processing signal 'terminated'"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.030916905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031062918Z" level=info msg="Daemon shutdown complete"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031129826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031209544Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 04 23:35:03 ha-098000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: docker.service: Deactivated successfully.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 dockerd[1154]: time="2024-12-04T23:35:04.084800926Z" level=info msg="Starting up"
	Dec 04 23:36:04 ha-098000-m04 dockerd[1154]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1204 15:36:03.971971   20196 out.go:270] * 
	W1204 15:36:03.973111   20196 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:36:04.052589   20196 out.go:201] 
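
The journal above pins down the failure sequence: the first dockerd (pid 491) spawns its own managed containerd, finishes initialization, and is then deliberately stopped; the second dockerd (pid 1154) instead dials the system containerd socket at /run/containerd/containerd.sock, which the earlier "sudo systemctl restart containerd" had brought up, and gives up when the dial's context deadline expires after 60s, matching the 1m0.9s "systemctl restart docker" and its exit status 1. Reasonable next checks on such a node (a hedged debugging sketch using standard systemd tooling):

    systemctl status containerd                         # is the system containerd alive?
    journalctl --no-pager -u containerd | tail -n 50    # did it wedge during startup?
    ls -l /run/containerd/containerd.sock               # does the socket dockerd dials exist?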
	
	
	==> Docker <==
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.480482933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.480524627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.480533412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.480623113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485304530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485378415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485392381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485468152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.488796991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.488864892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.489011132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.489159283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.505144687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.505534782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.505591473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.506131239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.584088576Z" level=info msg="shim disconnected" id=59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30 namespace=moby
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.584482016Z" level=warning msg="cleaning up after shim disconnected" id=59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30 namespace=moby
	Dec 04 23:34:37 ha-098000 dockerd[1151]: time="2024-12-04T23:34:37.584644745Z" level=info msg="ignoring event" container=59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.584822280Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.596833687Z" level=warning msg="cleanup warnings time=\"2024-12-04T23:34:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.455018691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.456263444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.456323640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.456579989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	274afa9228625       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   566b4c12aa8e2       coredns-7c65d6cfc9-2z7lq
	9260f06aa6160       9ca7e41918271                                                                                         About a minute ago   Running             kindnet-cni               1                   1784deace7582       kindnet-c9zw7
	3aa9f0074ad24       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   ee7fb852b0746       busybox-7dff88458-tkk5l
	a4f10e7a31b1e       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   9544aac6431ee       coredns-7c65d6cfc9-75cm5
	4d500c5582d7e       505d571f5fd56                                                                                         About a minute ago   Running             kube-proxy                1                   e007c09acabae       kube-proxy-9strn
	59729ff8ece5d       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   85942c1ee0c48       storage-provisioner
	06090b0373c28       2e96e5913fc06                                                                                         2 minutes ago        Running             etcd                      1                   492043398c8f7       etcd-ha-098000
	832c9a15fccb2       847c7bc1a5418                                                                                         2 minutes ago        Running             kube-scheduler            1                   85cb9204adcbc       kube-scheduler-ha-098000
	28b6bc3009d9a       4b34defda8067                                                                                         2 minutes ago        Running             kube-vip                  0                   092f7a958b993       kube-vip-ha-098000
	3fbffe6ec740e       0486b6c53a1b5                                                                                         2 minutes ago        Running             kube-controller-manager   1                   d3d303d826e70       kube-controller-manager-ha-098000
	d11a51451327e       9499c9960544e                                                                                         2 minutes ago        Running             kube-apiserver            1                   2e4b3bead8edd       kube-apiserver-ha-098000
	91698004f45ac       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   7e62e6836673c       busybox-7dff88458-tkk5l
	334347c0146ff       c69fa2e9cbf5f                                                                                         7 minutes ago        Exited              coredns                   0                   106dba456980c       coredns-7c65d6cfc9-75cm5
	d45b7ca2c321b       c69fa2e9cbf5f                                                                                         7 minutes ago        Exited              coredns                   0                   0af8351fa9e0d       coredns-7c65d6cfc9-2z7lq
	fdb9e4f5e8f3d       kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16              8 minutes ago        Exited              kindnet-cni               0                   9933ca421eee5       kindnet-c9zw7
	12aba82bb9eef       505d571f5fd56                                                                                         8 minutes ago        Exited              kube-proxy                0                   1d340d81fbfb5       kube-proxy-9strn
	542f42367b5c6       0486b6c53a1b5                                                                                         8 minutes ago        Exited              kube-controller-manager   0                   05f42a6061648       kube-controller-manager-ha-098000
	1a5a6b8eb38ec       847c7bc1a5418                                                                                         8 minutes ago        Exited              kube-scheduler            0                   08cbd5b0cfe57       kube-scheduler-ha-098000
	347bf5bfb2fe6       2e96e5913fc06                                                                                         8 minutes ago        Exited              etcd                      0                   b7d6e2da744bd       etcd-ha-098000
	671e22f525950       9499c9960544e                                                                                         8 minutes ago        Exited              kube-apiserver            0                   81c0cf31c7e46       kube-apiserver-ha-098000
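
This table is the runtime's view of the primary control plane after the restart: the 7-8 minute old "Exited" rows are the attempt-0 containers from the first boot, the "Running" rows are their attempt-1 replacements, and storage-provisioner's attempt 1 has already exited again, consistent with the runc "exit status 255" cleanup in the Docker journal above. The same listing can be pulled by hand from the host:

    # List running and exited containers on the primary node.
    minikube -p ha-098000 ssh -- sudo crictl ps -a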
	
	
	==> coredns [274afa922862] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44993 - 12483 "HINFO IN 5217430967915220008.4602414331418196309. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026715635s
	
	
	==> coredns [334347c0146f] <==
	[INFO] 10.244.0.4:55981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000158655s
	[INFO] 10.244.0.4:42290 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098463s
	[INFO] 10.244.0.4:58242 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000058466s
	[INFO] 10.244.0.4:37059 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090224s
	[INFO] 10.244.3.2:34052 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150969s
	[INFO] 10.244.3.2:48314 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000048987s
	[INFO] 10.244.3.2:47597 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00004272s
	[INFO] 10.244.3.2:43130 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042338s
	[INFO] 10.244.3.2:40288 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040487s
	[INFO] 10.244.1.2:41974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126535s
	[INFO] 10.244.0.4:46586 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136951s
	[INFO] 10.244.3.2:49834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132271s
	[INFO] 10.244.3.2:35105 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007496s
	[INFO] 10.244.3.2:46872 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103202s
	[INFO] 10.244.3.2:51001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043782s
	[INFO] 10.244.1.2:60852 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151622s
	[INFO] 10.244.1.2:45169 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00010811s
	[INFO] 10.244.0.4:50794 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179037s
	[INFO] 10.244.0.4:33885 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091089s
	[INFO] 10.244.0.4:59078 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080787s
	[INFO] 10.244.0.4:47967 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000331118s
	[INFO] 10.244.3.2:37401 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005625s
	[INFO] 10.244.3.2:58299 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00008056s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4f10e7a31b1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56042 - 51130 "HINFO IN 4860731135473207728.3302970177185641581. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.195382352s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[898011711]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Dec-2024 23:34:08.764) (total time: 30003ms):
	Trace[898011711]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (23:34:38.768)
	Trace[898011711]: [30.003839217s] [30.003839217s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[451941860]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Dec-2024 23:34:08.766) (total time: 30002ms):
	Trace[451941860]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:34:38.768)
	Trace[451941860]: [30.00227073s] [30.00227073s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[957834387]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Dec-2024 23:34:08.764) (total time: 30004ms):
	Trace[957834387]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (23:34:38.768)
	Trace[957834387]: [30.004945433s] [30.004945433s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
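
All three reflector traces in this instance fail identically: 30s i/o timeouts dialing 10.96.0.1:443, the in-cluster apiserver Service VIP. That points at service routing (kube-proxy/iptables rules not yet reprogrammed after the restart) rather than DNS itself, and it is why this CoreDNS replica started with an unsynced Kubernetes API. Standard checks, as a hedged sketch:

    kubectl get svc kubernetes -o wide                       # the 10.96.0.1 VIP
    kubectl get endpoints kubernetes                         # live apiserver endpoints behind it
    kubectl -n kube-system get pods -l k8s-app=kube-proxy    # is kube-proxy up on each node?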
	
	
	==> coredns [d45b7ca2c321] <==
	[INFO] 10.244.1.2:58995 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.045573822s
	[INFO] 10.244.0.4:47628 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000074289s
	[INFO] 10.244.0.4:33651 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000655957s
	[INFO] 10.244.0.4:59923 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000433816s
	[INFO] 10.244.1.2:47489 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094853s
	[INFO] 10.244.1.2:60918 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109867s
	[INFO] 10.244.0.4:58795 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102995s
	[INFO] 10.244.0.4:56882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100778s
	[INFO] 10.244.0.4:41069 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155649s
	[INFO] 10.244.0.4:47261 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005694s
	[INFO] 10.244.3.2:57069 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00065513s
	[INFO] 10.244.3.2:45549 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000047282s
	[INFO] 10.244.3.2:44245 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103531s
	[INFO] 10.244.1.2:39311 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122238s
	[INFO] 10.244.1.2:35593 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174598s
	[INFO] 10.244.1.2:45158 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007291s
	[INFO] 10.244.0.4:35211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106877s
	[INFO] 10.244.0.4:54591 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089769s
	[INFO] 10.244.0.4:59162 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036611s
	[INFO] 10.244.1.2:49523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134823s
	[INFO] 10.244.1.2:54333 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139019s
	[INFO] 10.244.3.2:46351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095077s
	[INFO] 10.244.3.2:33059 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000046925s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-098000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T15_27_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:27:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:36:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:27:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:27:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:27:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:28:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-098000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d2318e94e39401090f7022df3a380b0
	  System UUID:                70104c46-0000-0000-9279-8221d5ed18af
	  Boot ID:                    637a375b-a691-4a3e-8b6f-369766d12741
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tkk5l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 coredns-7c65d6cfc9-2z7lq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m16s
	  kube-system                 coredns-7c65d6cfc9-75cm5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m16s
	  kube-system                 etcd-ha-098000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m23s
	  kube-system                 kindnet-c9zw7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m17s
	  kube-system                 kube-apiserver-ha-098000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-controller-manager-ha-098000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-9strn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-scheduler-ha-098000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-vip-ha-098000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m15s                  kube-proxy       
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  Starting                 8m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    8m27s (x8 over 8m28s)  kubelet          Node ha-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m27s (x8 over 8m28s)  kubelet          Node ha-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m27s (x7 over 8m28s)  kubelet          Node ha-098000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m21s                  kubelet          Node ha-098000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m21s                  kubelet          Node ha-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m21s                  kubelet          Node ha-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m18s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  NodeReady                7m57s                  kubelet          Node ha-098000 status is now: NodeReady
	  Normal  RegisteredNode           7m13s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           5m57s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  Starting                 2m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m55s (x8 over 2m55s)  kubelet          Node ha-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s (x8 over 2m55s)  kubelet          Node ha-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s (x7 over 2m55s)  kubelet          Node ha-098000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m25s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           2m24s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           116s                   node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	
	
	Name:               ha-098000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T15_28_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:28:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:36:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:28:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:28:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:28:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:29:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-098000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 050a31912ec64c378c8000c9ffa16f74
	  System UUID:                2486449a-0000-0000-8055-5ee234f7d16f
	  Boot ID:                    90b90eed-fa44-41ea-9bc0-c9160a359639
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fvhj6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 etcd-ha-098000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m19s
	  kube-system                 kindnet-w7mbs                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m22s
	  kube-system                 kube-apiserver-ha-098000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-controller-manager-ha-098000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-proxy-8dv6r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-scheduler-ha-098000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-vip-ha-098000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m17s                  kube-proxy       
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 3m56s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m22s (x8 over 7m22s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  7m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     7m22s (x7 over 7m22s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m22s (x8 over 7m22s)  kubelet          Node ha-098000-m02 status is now: NodeHasNoDiskPressure
	  Normal   CIDRAssignmentFailed     7m21s                  cidrAllocator    Node ha-098000-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           7m18s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           7m13s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           5m57s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   NodeAllocatableEnforced  4m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 4m2s                   kubelet          Starting kubelet.
	  Warning  Rebooted                 4m1s                   kubelet          Node ha-098000-m02 has been rebooted, boot id: 68d7d994-2a07-4139-8dc9-8d63e0527a5a
	  Normal   NodeHasSufficientMemory  4m1s                   kubelet          Node ha-098000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m1s                   kubelet          Node ha-098000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m1s                   kubelet          Node ha-098000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m53s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node ha-098000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x7 over 2m36s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m25s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           2m24s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           116s                   node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	
	
	Name:               ha-098000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T15_30_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:30:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:36:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:34:03 +0000   Wed, 04 Dec 2024 23:30:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:34:03 +0000   Wed, 04 Dec 2024 23:30:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:34:03 +0000   Wed, 04 Dec 2024 23:30:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:34:03 +0000   Wed, 04 Dec 2024 23:30:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-098000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e68515224384447f8b991dc8a234a41a
	  System UUID:                eac240d6-0000-0000-830d-b844e6baedeb
	  Boot ID:                    8a55b884-8283-4e51-b29b-735e031af52b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xtv76                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 etcd-ha-098000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m3s
	  kube-system                 kindnet-cbqbd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-apiserver-ha-098000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-ha-098000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-rf4cp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-ha-098000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-vip-ha-098000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 119s                 kube-proxy       
	  Normal   Starting                 6m1s                 kube-proxy       
	  Normal   NodeAllocatableEnforced  6m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     6m5s                 cidrAllocator    Node ha-098000-m03 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  6m5s (x8 over 6m5s)  kubelet          Node ha-098000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m5s (x8 over 6m5s)  kubelet          Node ha-098000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m5s (x7 over 6m5s)  kubelet          Node ha-098000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                 node-controller  Node ha-098000-m03 event: Registered Node ha-098000-m03 in Controller
	  Normal   RegisteredNode           6m2s                 node-controller  Node ha-098000-m03 event: Registered Node ha-098000-m03 in Controller
	  Normal   RegisteredNode           5m57s                node-controller  Node ha-098000-m03 event: Registered Node ha-098000-m03 in Controller
	  Normal   RegisteredNode           3m53s                node-controller  Node ha-098000-m03 event: Registered Node ha-098000-m03 in Controller
	  Normal   RegisteredNode           2m25s                node-controller  Node ha-098000-m03 event: Registered Node ha-098000-m03 in Controller
	  Normal   RegisteredNode           2m24s                node-controller  Node ha-098000-m03 event: Registered Node ha-098000-m03 in Controller
	  Normal   Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m3s                 kubelet          Node ha-098000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s                 kubelet          Node ha-098000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s                 kubelet          Node ha-098000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m3s                 kubelet          Node ha-098000-m03 has been rebooted, boot id: 8a55b884-8283-4e51-b29b-735e031af52b
	  Normal   RegisteredNode           116s                 node-controller  Node ha-098000-m03 event: Registered Node ha-098000-m03 in Controller
	
	
	Name:               ha-098000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T15_30_55_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:30:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:32:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-098000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a62de52f960740ecbed3bac1b9967c23
	  System UUID:                8502430f-0000-0000-a6ae-7be776245ae1
	  Boot ID:                    2c58ff3e-7f5d-436d-bc58-b646d91cdd24
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bktcq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m12s
	  kube-system                 kube-proxy-mz4q2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m12s (x2 over 5m12s)  kubelet          Node ha-098000-m04 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     5m12s                  cidrAllocator    Node ha-098000-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     5m12s                  cidrAllocator    Node ha-098000-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m12s (x2 over 5m12s)  kubelet          Node ha-098000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m12s (x2 over 5m12s)  kubelet          Node ha-098000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  NodeReady                4m49s                  kubelet          Node ha-098000-m04 status is now: NodeReady
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           2m25s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           2m24s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           116s                   node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-098000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.035548] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008017] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.831418] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000001] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006643] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec 4 23:33] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.189224] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.252070] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.109203] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.970887] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +0.250804] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +0.104275] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +0.059539] kauditd_printk_skb: 135 callbacks suppressed
	[  +0.050691] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +2.385557] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +0.100797] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.107482] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	[  +0.131343] systemd-fstab-generator[1397]: Ignoring "noauto" option for root device
	[  +0.416585] systemd-fstab-generator[1555]: Ignoring "noauto" option for root device
	[  +6.808599] kauditd_printk_skb: 178 callbacks suppressed
	[ +34.877094] kauditd_printk_skb: 40 callbacks suppressed
	[Dec 4 23:34] kauditd_printk_skb: 20 callbacks suppressed
	[ +30.099102] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [06090b0373c2] <==
	{"level":"warn","ts":"2024-12-04T23:33:54.178006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T23:33:54.277894Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T23:33:54.348032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T23:33:54.355284Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T23:33:54.372458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T23:33:54.374462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T23:33:54.377529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T23:33:54.382603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b8c6c7563d17d844","from":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-04T23:33:55.730062Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"ba5f5cb2731bb4ee","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:33:55.730151Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ba5f5cb2731bb4ee","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:33:58.739587Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ba5f5cb2731bb4ee","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:33:58.740886Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ba5f5cb2731bb4ee","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:33:59.731284Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"ba5f5cb2731bb4ee","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:33:59.731332Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ba5f5cb2731bb4ee","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:34:03.733030Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"ba5f5cb2731bb4ee","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:34:03.733161Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ba5f5cb2731bb4ee","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:34:03.739776Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ba5f5cb2731bb4ee","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:34:03.740993Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ba5f5cb2731bb4ee","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-12-04T23:34:04.808860Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"ba5f5cb2731bb4ee","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-12-04T23:34:04.809180Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.809788Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.817560Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.817933Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.822840Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"ba5f5cb2731bb4ee","stream-type":"stream Message"}
	{"level":"info","ts":"2024-12-04T23:34:04.823089Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	
	
	==> etcd [347bf5bfb2fe] <==
	{"level":"info","ts":"2024-12-04T23:32:45.441636Z","caller":"traceutil/trace.go:171","msg":"trace[2067592358] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"7.930249208s","start":"2024-12-04T23:32:37.511384Z","end":"2024-12-04T23:32:45.441633Z","steps":["trace[2067592358] 'agreement among raft nodes before linearized reading'  (duration: 7.930237481s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:32:45.441645Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T23:32:37.511358Z","time spent":"7.930284886s","remote":"127.0.0.1:53382","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/12/04 23:32:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-04T23:32:45.469417Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4c9eee5331caa173","rtt":"895.585µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:32:45.469450Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4c9eee5331caa173","rtt":"6.632061ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:32:45.474963Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T23:32:45.474990Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-04T23:32:45.475061Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-12-04T23:32:45.477612Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477653Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477794Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477828Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477881Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477891Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477896Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.477902Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478035Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478719Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478746Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478876Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478921Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.484500Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-12-04T23:32:45.484609Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-12-04T23:32:45.484618Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-098000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 23:36:06 up 3 min,  0 users,  load average: 0.07, 0.13, 0.06
	Linux ha-098000 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9260f06aa616] <==
	I1204 23:35:30.721540       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:35:40.728953       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:35:40.729048       1 main.go:301] handling current node
	I1204 23:35:40.729129       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:35:40.729279       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:35:40.729641       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:35:40.729702       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:35:40.729974       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:35:40.730034       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:35:50.728940       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:35:50.729055       1 main.go:301] handling current node
	I1204 23:35:50.729090       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:35:50.729108       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:35:50.729688       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:35:50.729785       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:35:50.730373       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:35:50.730499       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:36:00.728959       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:36:00.729028       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:36:00.729792       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:36:00.729847       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:36:00.730298       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:36:00.730349       1 main.go:301] handling current node
	I1204 23:36:00.730594       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:36:00.730746       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fdb9e4f5e8f3] <==
	I1204 23:32:15.006102       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:25.007657       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:32:25.007678       1 main.go:301] handling current node
	I1204 23:32:25.007687       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:32:25.007690       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:32:25.007809       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:32:25.007816       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:25.007864       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:32:25.007868       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:32:35.003703       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:32:35.003856       1 main.go:301] handling current node
	I1204 23:32:35.003925       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:32:35.004015       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:32:35.004440       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:32:35.004559       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:35.004793       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:32:35.004877       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:32:45.006980       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:32:45.007018       1 main.go:301] handling current node
	I1204 23:32:45.007028       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:32:45.007068       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:32:45.007194       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:32:45.007199       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:45.010702       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:32:45.010735       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [671e22f52595] <==
	W1204 23:32:45.463180       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 23:32:45.463233       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 23:32:45.463290       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 23:32:45.463996       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E1204 23:32:45.465107       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465150       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465162       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465210       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465693       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465858       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.470160       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.470650       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.470675       1 watcher.go:342] watch chan error: etcdserver: no leader
	W1204 23:32:45.470803       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E1204 23:32:45.471772       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471810       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471812       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471821       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471830       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471831       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471841       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471842       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471779       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471852       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471789       1 watcher.go:342] watch chan error: etcdserver: no leader
	
	
	==> kube-apiserver [d11a51451327] <==
	I1204 23:33:38.594218       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1204 23:33:38.687977       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1204 23:33:38.688296       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1204 23:33:38.691058       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1204 23:33:38.691545       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1204 23:33:38.691575       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1204 23:33:38.691653       1 shared_informer.go:320] Caches are synced for configmaps
	I1204 23:33:38.692048       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1204 23:33:38.694556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1204 23:33:38.694668       1 aggregator.go:171] initial CRD sync complete...
	I1204 23:33:38.694729       1 autoregister_controller.go:144] Starting autoregister controller
	I1204 23:33:38.694758       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1204 23:33:38.694764       1 cache.go:39] Caches are synced for autoregister controller
	I1204 23:33:38.696202       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1204 23:33:38.697593       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W1204 23:33:38.705769       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I1204 23:33:38.717725       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1204 23:33:38.717774       1 policy_source.go:224] refreshing policies
	I1204 23:33:38.734833       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 23:33:38.808657       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 23:33:38.819037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1204 23:33:38.825290       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1204 23:33:39.595794       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1204 23:33:39.838860       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.7]
	W1204 23:33:59.841208       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-controller-manager [3fbffe6ec740] <==
	I1204 23:33:54.370410       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gws4x\": the object has been modified; please apply your changes to the latest version and try again"
	I1204 23:33:54.370577       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c03d8180-947f-4c13-8442-c9080cad76d5", APIVersion:"v1", ResourceVersion:"295", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gws4x": the object has been modified; please apply your changes to the latest version and try again
	I1204 23:34:03.668954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m03"
	I1204 23:34:04.550573       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.625921ms"
	I1204 23:34:04.550900       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.075µs"
	I1204 23:34:06.876947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.441096ms"
	I1204 23:34:06.877241       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.171µs"
	I1204 23:34:08.609298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="50.56µs"
	I1204 23:34:09.665048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.583248ms"
	I1204 23:34:09.665304       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.428µs"
	I1204 23:34:21.466481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:34:21.497671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:34:22.938128       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:34:24.418330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43.789µs"
	I1204 23:34:25.747119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:34:26.532160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:34:40.034377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="49.026µs"
	I1204 23:34:40.057548       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gws4x\": the object has been modified; please apply your changes to the latest version and try again"
	I1204 23:34:40.057604       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c03d8180-947f-4c13-8442-c9080cad76d5", APIVersion:"v1", ResourceVersion:"295", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gws4x": the object has been modified; please apply your changes to the latest version and try again
	I1204 23:34:40.074632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.303024ms"
	I1204 23:34:40.074955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="276.731µs"
	I1204 23:34:42.740022       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gws4x\": the object has been modified; please apply your changes to the latest version and try again"
	I1204 23:34:42.740245       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c03d8180-947f-4c13-8442-c9080cad76d5", APIVersion:"v1", ResourceVersion:"295", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gws4x": the object has been modified; please apply your changes to the latest version and try again
	I1204 23:34:42.779484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.083905ms"
	I1204 23:34:42.779600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.609µs"
	
	
	==> kube-controller-manager [542f42367b5c] <==
	I1204 23:30:54.887539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	E1204 23:30:54.955494       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-098000-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-098000-m04" podCIDRs=["10.244.5.0/24"]
	E1204 23:30:54.955551       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-098000-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-098000-m04"
	E1204 23:30:54.955659       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-098000-m04': failed to patch node CIDR: Node \"ha-098000-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1204 23:30:54.955704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:54.963682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:55.113954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:55.398651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:58.480353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.039327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.039986       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-098000-m04"
	I1204 23:30:59.109649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.147948       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.198931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:04.937478       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:17.609283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:17.610373       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-098000-m04"
	I1204 23:31:17.617825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:18.441772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:25.412764       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:32:05.990296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m02"
	I1204 23:32:06.869398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.356069ms"
	I1204 23:32:06.870323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.454µs"
	I1204 23:32:09.287240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.687088ms"
	I1204 23:32:09.288363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="1.04846ms"
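
The range_allocator errors above show the node IPAM controller tripping over a node that would end up with two IPv4 PodCIDRs after the restart: spec.podCIDRs allows at most one CIDR per IP family and may only change from empty to a valid value, so the patch is rejected and the freshly allocated range is released. A small client-go sketch (kubeconfig boilerplate assumed; the node name is taken from the log) for inspecting the field in question:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-098000-m04", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// spec.podCIDRs may hold at most one CIDR per IP family, and may only
	// transition from "" to a valid value -- which is why the allocator's
	// patch above is rejected outright instead of merged.
	fmt.Println("podCIDR: ", node.Spec.PodCIDR)
	fmt.Println("podCIDRs:", node.Spec.PodCIDRs)
}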
	
	
	==> kube-proxy [12aba82bb9ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:27:51.161946       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:27:51.171777       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1204 23:27:51.171971       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:27:51.199877       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:27:51.199962       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:27:51.199995       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:27:51.202350       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:27:51.202766       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:27:51.202823       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:27:51.204709       1 config.go:199] "Starting service config controller"
	I1204 23:27:51.205031       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:27:51.205184       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:27:51.205227       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:27:51.206547       1 config.go:328] "Starting node config controller"
	I1204 23:27:51.206855       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:27:51.305717       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:27:51.305831       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:27:51.307064       1 shared_informer.go:320] Caches are synced for node config
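
The truncated error block at the top of this section is kube-proxy's nftables cleanup failing: the guest kernel rejects "add table ip kube-proxy" with "Operation not supported", so kube-proxy logs the error and carries on with the iptables proxier it reports a few lines later. A sketch that reproduces the probe by piping the same one-line ruleset into nft via /dev/stdin, as the log path suggests; it assumes an nft binary on PATH and is for illustration only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same rule the kube-proxy log shows being rejected.
	cmd := exec.Command("nft", "-f", "/dev/stdin")
	cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
	out, err := cmd.CombinedOutput()
	// On a kernel without nf_tables support this prints the familiar
	// "Error: Could not process rule: Operation not supported".
	fmt.Printf("nft: %s(err: %v)\n", out, err)
}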
	
	
	==> kube-proxy [4d500c5582d7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:34:08.809079       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:34:08.830727       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1204 23:34:08.830876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:34:08.863318       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:34:08.863364       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:34:08.863390       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:34:08.866204       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:34:08.866652       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:34:08.866681       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:34:08.868711       1 config.go:199] "Starting service config controller"
	I1204 23:34:08.869077       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:34:08.869308       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:34:08.869337       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:34:08.870512       1 config.go:328] "Starting node config controller"
	I1204 23:34:08.870544       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:34:08.970002       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:34:08.970040       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:34:08.970567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a5a6b8eb38e] <==
	I1204 23:27:45.983251       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1204 23:30:28.192188       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 051ac1c9-8f93-41a0-a61e-4bd649cbcde5(default/busybox-7dff88458-fvhj6) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-fvhj6"
	E1204 23:30:28.192284       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 051ac1c9-8f93-41a0-a61e-4bd649cbcde5(default/busybox-7dff88458-fvhj6) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-fvhj6"
	I1204 23:30:28.192310       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fvhj6" node="ha-098000-m02"
	E1204 23:30:28.227861       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-rlnh2 is already present in the active queue" pod="default/busybox-7dff88458-rlnh2"
	E1204 23:30:54.897693       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vtbzp\": pod kindnet-vtbzp is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vtbzp" node="ha-098000-m04"
	E1204 23:30:54.897853       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vtbzp\": pod kindnet-vtbzp is already assigned to node \"ha-098000-m04\"" pod="kube-system/kindnet-vtbzp"
	E1204 23:30:54.897931       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pdg7h\": pod kube-proxy-pdg7h is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pdg7h" node="ha-098000-m04"
	E1204 23:30:54.897986       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pdg7h\": pod kube-proxy-pdg7h is already assigned to node \"ha-098000-m04\"" pod="kube-system/kube-proxy-pdg7h"
	E1204 23:30:54.935358       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x7xvx\": pod kindnet-x7xvx is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x7xvx" node="ha-098000-m04"
	E1204 23:30:54.935544       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x7xvx\": pod kindnet-x7xvx is already assigned to node \"ha-098000-m04\"" pod="kube-system/kindnet-x7xvx"
	E1204 23:30:54.936188       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bktcq\": pod kindnet-bktcq is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bktcq" node="ha-098000-m04"
	E1204 23:30:54.936258       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5ff5e29d-8bdb-492f-8be8-65295fb7d83f(kube-system/kindnet-bktcq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bktcq"
	E1204 23:30:54.936329       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bktcq\": pod kindnet-bktcq is already assigned to node \"ha-098000-m04\"" pod="kube-system/kindnet-bktcq"
	I1204 23:30:54.936384       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bktcq" node="ha-098000-m04"
	E1204 23:30:54.935423       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rgp97\": pod kube-proxy-rgp97 is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rgp97" node="ha-098000-m04"
	E1204 23:30:54.935358       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mz4q2\": pod kube-proxy-mz4q2 is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mz4q2" node="ha-098000-m04"
	E1204 23:30:54.937674       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mz4q2\": pod kube-proxy-mz4q2 is already assigned to node \"ha-098000-m04\"" pod="kube-system/kube-proxy-mz4q2"
	E1204 23:30:54.939537       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c066164d-5b0a-40ca-93b9-d13c732f8d23(kube-system/kube-proxy-rgp97) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rgp97"
	E1204 23:30:54.939583       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rgp97\": pod kube-proxy-rgp97 is already assigned to node \"ha-098000-m04\"" pod="kube-system/kube-proxy-rgp97"
	I1204 23:30:54.939599       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rgp97" node="ha-098000-m04"
	I1204 23:32:45.399421       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1204 23:32:45.401282       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:32:45.403399       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1204 23:32:45.416820       1 run.go:72] "command failed" err="finished without leader elect"
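
The exit at the end of this section ("command failed ... finished without leader elect") is the normal fate of a control-plane component that loses its leader-election lease during an HA restart: losing the lease is treated as fatal so a stale leader can never keep scheduling. A minimal client-go sketch of that pattern; the lease name, namespace, and identity are placeholders:

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-scheduler", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "demo-holder"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease, leading") },
			// Exiting on lost leadership mirrors the scheduler's
			// "finished without leader elect" failure mode above.
			OnStoppedLeading: func() { log.Fatal("lost lease, exiting") },
		},
	})
}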
	
	
	==> kube-scheduler [832c9a15fccb] <==
	I1204 23:33:19.647940       1 serving.go:386] Generated self-signed cert in-memory
	W1204 23:33:30.004268       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1204 23:33:30.004311       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 23:33:30.004317       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 23:33:38.634637       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 23:33:38.636924       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:33:38.643589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:33:38.644074       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 23:33:38.644906       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:33:38.645277       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 23:33:38.745790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 23:34:11 ha-098000 kubelet[1562]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 23:34:11 ha-098000 kubelet[1562]: I1204 23:34:11.441692    1562 scope.go:117] "RemoveContainer" containerID="063e96a3a34a40988f4f40dfe057ea95eddf5887753101ebead50d2f9d0677dc"
	Dec 04 23:34:24 ha-098000 kubelet[1562]: I1204 23:34:24.406988    1562 scope.go:117] "RemoveContainer" containerID="d45b7ca2c321bb88eb0207b6b8d2cc8e28c3a5dfeb3831e851f9d73934d05579"
	Dec 04 23:34:24 ha-098000 kubelet[1562]: E1204 23:34:24.407391    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7c65d6cfc9-2z7lq_kube-system(7e1e544e-4664-4d4f-b739-138f16245205)\"" pod="kube-system/coredns-7c65d6cfc9-2z7lq" podUID="7e1e544e-4664-4d4f-b739-138f16245205"
	Dec 04 23:34:37 ha-098000 kubelet[1562]: I1204 23:34:37.991375    1562 scope.go:117] "RemoveContainer" containerID="250af664d21e9666dd1560b702a1fb34f1134ffea691f4237a098865ae6ed4aa"
	Dec 04 23:34:37 ha-098000 kubelet[1562]: I1204 23:34:37.992392    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:34:37 ha-098000 kubelet[1562]: E1204 23:34:37.992685    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:34:39 ha-098000 kubelet[1562]: I1204 23:34:39.407493    1562 scope.go:117] "RemoveContainer" containerID="d45b7ca2c321bb88eb0207b6b8d2cc8e28c3a5dfeb3831e851f9d73934d05579"
	Dec 04 23:34:53 ha-098000 kubelet[1562]: I1204 23:34:53.407334    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:34:53 ha-098000 kubelet[1562]: E1204 23:34:53.407489    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:05 ha-098000 kubelet[1562]: I1204 23:35:05.407398    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:05 ha-098000 kubelet[1562]: E1204 23:35:05.407597    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:11 ha-098000 kubelet[1562]: E1204 23:35:11.434026    1562 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 23:35:11 ha-098000 kubelet[1562]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 23:35:11 ha-098000 kubelet[1562]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 23:35:11 ha-098000 kubelet[1562]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 23:35:11 ha-098000 kubelet[1562]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 23:35:18 ha-098000 kubelet[1562]: I1204 23:35:18.406757    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:18 ha-098000 kubelet[1562]: E1204 23:35:18.407152    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:31 ha-098000 kubelet[1562]: I1204 23:35:31.407458    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:31 ha-098000 kubelet[1562]: E1204 23:35:31.408574    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:42 ha-098000 kubelet[1562]: I1204 23:35:42.407061    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:42 ha-098000 kubelet[1562]: E1204 23:35:42.407183    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:54 ha-098000 kubelet[1562]: I1204 23:35:54.407450    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:54 ha-098000 kubelet[1562]: E1204 23:35:54.407806    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
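
The recurring "back-off 1m20s restarting failed container" lines follow kubelet's CrashLoopBackOff schedule: the restart delay starts at 10s and doubles after each failed restart up to a 5m cap, so 1m20s is the fourth step (10s × 2³). A few lines of Go that print the schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for i := 1; i <= 7; i++ {
		fmt.Printf("failed restart %d: back-off %v\n", i, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // kubelet caps the back-off at five minutes
		}
	}
}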
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-098000 -n ha-098000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-098000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (222.75s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-darwin-amd64 -p ha-098000 node delete m03 -v=7 --alsologtostderr: (7.106444612s)
ha_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr: exit status 2 (388.539348ms)

                                                
                                                
-- stdout --
	ha-098000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-098000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-098000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:36:15.793132   20309 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:36:15.794939   20309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:36:15.794946   20309 out.go:358] Setting ErrFile to fd 2...
	I1204 15:36:15.794950   20309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:36:15.795170   20309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:36:15.795378   20309 out.go:352] Setting JSON to false
	I1204 15:36:15.795399   20309 mustload.go:65] Loading cluster: ha-098000
	I1204 15:36:15.795455   20309 notify.go:220] Checking for updates...
	I1204 15:36:15.796804   20309 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:36:15.796836   20309 status.go:174] checking status of ha-098000 ...
	I1204 15:36:15.797274   20309 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:15.797322   20309 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:15.809258   20309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58764
	I1204 15:36:15.809588   20309 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:15.810002   20309 main.go:141] libmachine: Using API Version  1
	I1204 15:36:15.810013   20309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:15.810233   20309 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:15.810334   20309 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:36:15.810435   20309 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:36:15.810522   20309 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:36:15.811819   20309 status.go:371] ha-098000 host status = "Running" (err=<nil>)
	I1204 15:36:15.811839   20309 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:36:15.812113   20309 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:15.812139   20309 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:15.827120   20309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58766
	I1204 15:36:15.827494   20309 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:15.827873   20309 main.go:141] libmachine: Using API Version  1
	I1204 15:36:15.827884   20309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:15.828107   20309 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:15.828211   20309 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:36:15.828313   20309 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:36:15.828575   20309 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:15.828600   20309 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:15.839702   20309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58768
	I1204 15:36:15.839998   20309 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:15.840299   20309 main.go:141] libmachine: Using API Version  1
	I1204 15:36:15.840307   20309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:15.840570   20309 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:15.840700   20309 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:36:15.841403   20309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 15:36:15.841424   20309 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:36:15.841509   20309 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:36:15.841588   20309 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:36:15.841687   20309 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:36:15.841775   20309 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:36:15.877807   20309 ssh_runner.go:195] Run: systemctl --version
	I1204 15:36:15.882452   20309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:36:15.893457   20309 kubeconfig.go:125] found "ha-098000" server: "https://192.169.0.254:8443"
	I1204 15:36:15.893481   20309 api_server.go:166] Checking apiserver status ...
	I1204 15:36:15.893533   20309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:36:15.904720   20309 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1988/cgroup
	W1204 15:36:15.912802   20309 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1988/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:36:15.913283   20309 ssh_runner.go:195] Run: ls
	I1204 15:36:15.916450   20309 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1204 15:36:15.919495   20309 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1204 15:36:15.919510   20309 status.go:463] ha-098000 apiserver status = Running (err=<nil>)
	I1204 15:36:15.919518   20309 status.go:176] ha-098000 status: &{Name:ha-098000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:36:15.919534   20309 status.go:174] checking status of ha-098000-m02 ...
	I1204 15:36:15.919819   20309 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:15.919842   20309 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:15.931135   20309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58772
	I1204 15:36:15.931447   20309 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:15.931772   20309 main.go:141] libmachine: Using API Version  1
	I1204 15:36:15.931784   20309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:15.932006   20309 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:15.932109   20309 main.go:141] libmachine: (ha-098000-m02) Calling .GetState
	I1204 15:36:15.932202   20309 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:36:15.932287   20309 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20220
	I1204 15:36:15.933498   20309 status.go:371] ha-098000-m02 host status = "Running" (err=<nil>)
	I1204 15:36:15.933507   20309 host.go:66] Checking if "ha-098000-m02" exists ...
	I1204 15:36:15.933777   20309 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:15.933800   20309 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:15.944923   20309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58774
	I1204 15:36:15.945236   20309 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:15.945603   20309 main.go:141] libmachine: Using API Version  1
	I1204 15:36:15.945619   20309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:15.945866   20309 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:15.945976   20309 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:36:15.946071   20309 host.go:66] Checking if "ha-098000-m02" exists ...
	I1204 15:36:15.946338   20309 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:15.946362   20309 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:15.957386   20309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58776
	I1204 15:36:15.957718   20309 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:15.958078   20309 main.go:141] libmachine: Using API Version  1
	I1204 15:36:15.958092   20309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:15.958329   20309 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:15.958435   20309 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:36:15.958620   20309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 15:36:15.958632   20309 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:36:15.958718   20309 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:36:15.958815   20309 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:36:15.958925   20309 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:36:15.959016   20309 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:36:15.986414   20309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:36:15.998108   20309 kubeconfig.go:125] found "ha-098000" server: "https://192.169.0.254:8443"
	I1204 15:36:15.998122   20309 api_server.go:166] Checking apiserver status ...
	I1204 15:36:15.998175   20309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:36:16.009856   20309 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2117/cgroup
	W1204 15:36:16.018174   20309 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2117/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:36:16.018228   20309 ssh_runner.go:195] Run: ls
	I1204 15:36:16.022212   20309 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1204 15:36:16.025137   20309 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1204 15:36:16.025148   20309 status.go:463] ha-098000-m02 apiserver status = Running (err=<nil>)
	I1204 15:36:16.025153   20309 status.go:176] ha-098000-m02 status: &{Name:ha-098000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:36:16.025169   20309 status.go:174] checking status of ha-098000-m04 ...
	I1204 15:36:16.025689   20309 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:16.025710   20309 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:16.037042   20309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58780
	I1204 15:36:16.037366   20309 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:16.037684   20309 main.go:141] libmachine: Using API Version  1
	I1204 15:36:16.037700   20309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:16.037935   20309 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:16.038043   20309 main.go:141] libmachine: (ha-098000-m04) Calling .GetState
	I1204 15:36:16.038137   20309 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:36:16.038227   20309 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 20252
	I1204 15:36:16.039494   20309 status.go:371] ha-098000-m04 host status = "Running" (err=<nil>)
	I1204 15:36:16.039502   20309 host.go:66] Checking if "ha-098000-m04" exists ...
	I1204 15:36:16.039796   20309 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:16.039820   20309 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:16.050907   20309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58782
	I1204 15:36:16.051220   20309 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:16.051537   20309 main.go:141] libmachine: Using API Version  1
	I1204 15:36:16.051548   20309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:16.051750   20309 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:16.051865   20309 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:36:16.051973   20309 host.go:66] Checking if "ha-098000-m04" exists ...
	I1204 15:36:16.052245   20309 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:16.052270   20309 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:16.063340   20309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58784
	I1204 15:36:16.063683   20309 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:16.064051   20309 main.go:141] libmachine: Using API Version  1
	I1204 15:36:16.064068   20309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:16.064293   20309 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:16.064389   20309 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:36:16.064535   20309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 15:36:16.064558   20309 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:36:16.064647   20309 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:36:16.064734   20309 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:36:16.064812   20309 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:36:16.064898   20309 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:36:16.095479   20309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:36:16.105774   20309 status.go:176] ha-098000-m04 status: &{Name:ha-098000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
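The "unable to find freezer cgroup" warnings in the stderr above come from the egrep shown there: minikube greps /proc/<pid>/cgroup for a freezer entry to locate the apiserver's cgroup, and on a cgroup v2 (unified) hierarchy there is no per-controller freezer line, so the grep exits 1 and the check falls back to the plain ls on the next line. A sketch of the same detection, reading the current process's file for illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/cgroup")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// cgroup v1 lines look like "7:freezer:/kubepods/..."; the unified
		// v2 hierarchy has a single "0::/..." line and no freezer entry.
		if strings.Contains(sc.Text(), ":freezer:") {
			found = true
		}
	}
	fmt.Println("freezer cgroup present:", found)
}
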
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-098000 -n ha-098000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-098000 logs -n 25: (3.072072347s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m02 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m03_ha-098000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m03:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04:/home/docker/cp-test_ha-098000-m03_ha-098000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m04 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m03_ha-098000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-098000 cp testdata/cp-test.txt                                                                                            | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3261314918/001/cp-test_ha-098000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000:/home/docker/cp-test_ha-098000-m04_ha-098000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000 sudo cat                                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m04_ha-098000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m02:/home/docker/cp-test_ha-098000-m04_ha-098000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m02 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m04_ha-098000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m03:/home/docker/cp-test_ha-098000-m04_ha-098000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m03 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m04_ha-098000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-098000 node stop m02 -v=7                                                                                                 | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-098000 node start m02 -v=7                                                                                                | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-098000 -v=7                                                                                                       | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:32 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-098000 -v=7                                                                                                            | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:32 PST | 04 Dec 24 15:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-098000 --wait=true -v=7                                                                                                | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:32 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-098000                                                                                                            | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:36 PST |                     |
	| node    | ha-098000 node delete m03 -v=7                                                                                               | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:36 PST | 04 Dec 24 15:36 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 15:32:53
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 15:32:53.124576   20196 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:32:53.124878   20196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:53.124886   20196 out.go:358] Setting ErrFile to fd 2...
	I1204 15:32:53.124892   20196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:53.125142   20196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:32:53.126967   20196 out.go:352] Setting JSON to false
	I1204 15:32:53.159313   20196 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5543,"bootTime":1733349630,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 15:32:53.159464   20196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:32:53.181549   20196 out.go:177] * [ha-098000] minikube v1.34.0 on Darwin 15.0.1
	I1204 15:32:53.224271   20196 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:32:53.224311   20196 notify.go:220] Checking for updates...
	I1204 15:32:53.267840   20196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:32:53.289126   20196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 15:32:53.310338   20196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:32:53.331010   20196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 15:32:53.352255   20196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:32:53.373929   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:32:53.374098   20196 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:32:53.374835   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.374907   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:32:53.386958   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58600
	I1204 15:32:53.387294   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:32:53.387686   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:32:53.387699   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:32:53.387905   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:32:53.388016   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.418809   20196 out.go:177] * Using the hyperkit driver based on existing profile
	I1204 15:32:53.461003   20196 start.go:297] selected driver: hyperkit
	I1204 15:32:53.461036   20196 start.go:901] validating driver "hyperkit" against &{Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:53.461290   20196 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:32:53.461477   20196 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:53.461727   20196 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 15:32:53.473875   20196 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 15:32:53.481311   20196 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.481337   20196 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 15:32:53.486904   20196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:32:53.486942   20196 cni.go:84] Creating CNI manager for ""
	I1204 15:32:53.486987   20196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
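The two cni.go lines above show the heuristic at work: no CNI named in the profile plus a multinode topology resolves to kindnet. A minimal Go sketch of that decision; the function name and single-node fallback are illustrative assumptions, not minikube's actual API:

// chooseCNI mirrors the logged heuristic: an explicit choice wins, a
// multinode profile falls back to kindnet, a single node keeps the default.
package main

import "fmt"

func chooseCNI(requested string, nodeCount int) string {
	if requested != "" {
		return requested
	}
	if nodeCount > 1 {
		return "kindnet" // pod traffic must route between hosts
	}
	return "bridge" // assumed single-node default for this sketch
}

func main() {
	fmt.Println(chooseCNI("", 4)) // kindnet, matching the 4-node log above
}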
	I1204 15:32:53.487059   20196 start.go:340] cluster config:
	{Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:53.487162   20196 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:53.508071   20196 out.go:177] * Starting "ha-098000" primary control-plane node in "ha-098000" cluster
	I1204 15:32:53.529205   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:32:53.529292   20196 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1204 15:32:53.529312   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:32:53.529537   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:32:53.529555   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:32:53.529727   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:32:53.530635   20196 start.go:360] acquireMachinesLock for ha-098000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:32:53.530735   20196 start.go:364] duration metric: took 76.824µs to acquireMachinesLock for "ha-098000"
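The acquireMachinesLock lines above use a named lock with a 13m timeout so concurrent minikube invocations serialize access to the machine store; here it was uncontended and took 76.824µs. A library-style sketch of a timed acquire, under the assumption that a buffered channel stands in for minikube's lock implementation:

package sketch

import (
	"fmt"
	"time"
)

// timedLock serializes access the way the logged machines lock does:
// Acquire blocks until the slot frees up or the timeout elapses.
type timedLock struct{ slot chan struct{} }

func newTimedLock() *timedLock { return &timedLock{slot: make(chan struct{}, 1)} }

func (l *timedLock) Acquire(timeout time.Duration) error {
	select {
	case l.slot <- struct{}{}:
		return nil // held immediately; the log's microsecond case
	case <-time.After(timeout):
		return fmt.Errorf("machines lock: timed out after %s", timeout)
	}
}

func (l *timedLock) Release() { <-l.slot }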
	I1204 15:32:53.530765   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:32:53.530784   20196 fix.go:54] fixHost starting: 
	I1204 15:32:53.531293   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.531320   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:32:53.542703   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58602
	I1204 15:32:53.543046   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:32:53.543457   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:32:53.543473   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:32:53.543695   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:32:53.543798   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.543917   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:32:53.544005   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.544085   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 19294
	I1204 15:32:53.545215   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid 19294 missing from process table
	I1204 15:32:53.545258   20196 fix.go:112] recreateIfNeeded on ha-098000: state=Stopped err=<nil>
	I1204 15:32:53.545275   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	W1204 15:32:53.545373   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:32:53.586803   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000" ...
	I1204 15:32:53.608028   20196 main.go:141] libmachine: (ha-098000) Calling .Start
	I1204 15:32:53.608287   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.608354   20196 main.go:141] libmachine: (ha-098000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid
	I1204 15:32:53.610773   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid 19294 missing from process table
	I1204 15:32:53.610786   20196 main.go:141] libmachine: (ha-098000) DBG | pid 19294 is in state "Stopped"
	I1204 15:32:53.610801   20196 main.go:141] libmachine: (ha-098000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid...
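The DBG lines above are the stale-lock dance: the pid file survived an unclean shutdown, pid 19294 is gone from the process table, so the file is deleted before a new hyperkit process starts. A sketch of that liveness probe (not the hyperkit driver's actual code); signal 0 checks for existence without disturbing the process:

package sketch

import (
	"errors"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// removeIfStale deletes pidFile when the recorded pid is no longer running.
func removeIfStale(pidFile string) error {
	data, err := os.ReadFile(pidFile)
	if err != nil {
		return err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return err
	}
	// Kill with signal 0 delivers nothing; ESRCH means "missing from
	// process table", exactly the condition logged above.
	if err := syscall.Kill(pid, 0); errors.Is(err, syscall.ESRCH) {
		return os.Remove(pidFile)
	}
	return nil
}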
	I1204 15:32:53.611292   20196 main.go:141] libmachine: (ha-098000) DBG | Using UUID 70106e4e-8082-4c46-9279-8221d5ed18af
	I1204 15:32:53.728648   20196 main.go:141] libmachine: (ha-098000) DBG | Generated MAC 46:3b:47:9c:31:41
	I1204 15:32:53.728673   20196 main.go:141] libmachine: (ha-098000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:32:53.728953   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70106e4e-8082-4c46-9279-8221d5ed18af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000425170)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:32:53.728996   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70106e4e-8082-4c46-9279-8221d5ed18af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000425170)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:32:53.729068   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "70106e4e-8082-4c46-9279-8221d5ed18af", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/ha-098000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:32:53.729113   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 70106e4e-8082-4c46-9279-8221d5ed18af -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/ha-098000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:32:53.729129   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:32:53.730591   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Pid is 20209
	I1204 15:32:53.731014   20196 main.go:141] libmachine: (ha-098000) DBG | Attempt 0
	I1204 15:32:53.731028   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.731114   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:32:53.732978   20196 main.go:141] libmachine: (ha-098000) DBG | Searching for 46:3b:47:9c:31:41 in /var/db/dhcpd_leases ...
	I1204 15:32:53.733030   20196 main.go:141] libmachine: (ha-098000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:32:53.733053   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:32:53.733076   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f47a}
	I1204 15:32:53.733086   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f3e2}
	I1204 15:32:53.733096   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f369}
	I1204 15:32:53.733112   20196 main.go:141] libmachine: (ha-098000) DBG | Found match: 46:3b:47:9c:31:41
	I1204 15:32:53.733119   20196 main.go:141] libmachine: (ha-098000) DBG | IP: 192.169.0.5
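The search above pairs the MAC hyperkit generated with the vmnet DHCP lease database to learn the VM's IP. A read-only sketch of that scan; note the lease file can drop leading zeros in octets (the ID field above shows b2:39:f5:23:b:32), so a production matcher would normalize both sides first:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC walks the lease entries; ip_address precedes hw_address
// within each block, so the last ip_address seen belongs to the match.
func findIPForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			return ip, nil
		}
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	fmt.Println(findIPForMAC("/var/db/dhcpd_leases", "46:3b:47:9c:31:41"))
}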
	I1204 15:32:53.733163   20196 main.go:141] libmachine: (ha-098000) Calling .GetConfigRaw
	I1204 15:32:53.733987   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:32:53.734258   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:32:53.734730   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:32:53.734741   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.734939   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:32:53.735075   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:32:53.735212   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:32:53.735339   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:32:53.735471   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:32:53.735700   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:32:53.735888   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:32:53.735897   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:32:53.741792   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:32:53.798085   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:32:53.799084   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:32:53.799132   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:32:53.799147   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:32:53.799159   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:32:54.212915   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:32:54.212930   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:32:54.327517   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:32:54.327538   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:32:54.327567   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:32:54.327585   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:32:54.328504   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:32:54.328518   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:00.053293   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:33:00.053310   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:33:00.053327   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:33:00.080441   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:33:04.805929   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:04.805956   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.806123   20196 buildroot.go:166] provisioning hostname "ha-098000"
	I1204 15:33:04.806135   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.806234   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.806337   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:04.806431   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.806539   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.806630   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:04.806774   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:04.806928   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:04.806937   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000 && echo "ha-098000" | sudo tee /etc/hostname
	I1204 15:33:04.881527   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000
	
	I1204 15:33:04.881546   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.881688   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:04.881782   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.881867   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.881972   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:04.882116   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:04.882259   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:04.882270   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:04.951908   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:33:04.951928   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:04.951941   20196 buildroot.go:174] setting up certificates
	I1204 15:33:04.951947   20196 provision.go:84] configureAuth start
	I1204 15:33:04.951953   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.952087   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:04.952194   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.952301   20196 provision.go:143] copyHostCerts
	I1204 15:33:04.952333   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:04.952388   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:04.952396   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:04.952514   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:04.952739   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:04.952770   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:04.952775   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:04.952846   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:04.953021   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:04.953050   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:04.953054   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:04.953117   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:04.953299   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000 san=[127.0.0.1 192.169.0.5 ha-098000 localhost minikube]
	I1204 15:33:05.029495   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:05.029569   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:05.029587   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.029725   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.029828   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.029935   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.030021   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:05.069556   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:05.069632   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:05.088502   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:05.088560   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 15:33:05.107211   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:05.107270   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:33:05.127045   20196 provision.go:87] duration metric: took 175.080758ms to configureAuth
	I1204 15:33:05.127060   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:05.127241   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:05.127255   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:05.127390   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.127495   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.127590   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.127679   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.127810   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.127983   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.128112   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.128119   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:05.194828   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:05.194840   20196 buildroot.go:70] root file system type: tmpfs
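The tmpfs answer above tells the provisioner it is talking to the RAM-backed buildroot guest, so the docker unit must be (re)written on every boot. A sketch of the same probe, meant to run on the guest (df --output is GNU coreutils; macOS df lacks the flag):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType runs the probe from the logged SSH command and keeps the
// last whitespace-separated token, e.g. ["Type", "tmpfs"] -> "tmpfs".
func rootFSType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(out))
	if len(fields) == 0 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	return fields[len(fields)-1], nil
}

func main() {
	fmt.Println(rootFSType())
}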
	I1204 15:33:05.194934   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:05.194945   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.195075   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.195184   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.195275   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.195365   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.195540   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.195677   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.195720   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:05.269411   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:05.269434   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.269574   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.269679   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.269784   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.269878   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.270029   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.270180   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.270192   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:06.947784   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
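The diff-or-replace one-liner above is an idempotence guard: the unit is swapped in and docker restarted only when the rendered file differs (here diff failed because no unit existed yet, which forces the install branch). A sketch of the same idiom done natively rather than via shell; the paths and enable/restart sequence mirror the logged command, but this is not minikube's implementation:

package sketch

import (
	"bytes"
	"os"
	"os/exec"
)

// updateUnit rewrites a systemd unit only when its contents changed,
// then reloads and restarts, matching the logged mv/daemon-reload/restart.
func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the disruptive restart
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}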
	I1204 15:33:06.947801   20196 machine.go:96] duration metric: took 13.212685267s to provisionDockerMachine
	I1204 15:33:06.947813   20196 start.go:293] postStartSetup for "ha-098000" (driver="hyperkit")
	I1204 15:33:06.947820   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:06.947830   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:06.948036   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:06.948057   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:06.948150   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:06.948258   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:06.948370   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:06.948484   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:06.990689   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:06.994074   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:06.994089   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:06.994206   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:06.994349   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:06.994356   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:06.994521   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:07.005479   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:07.040997   20196 start.go:296] duration metric: took 93.160395ms for postStartSetup
	I1204 15:33:07.041019   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.041214   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:07.041227   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.041320   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.041401   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.041488   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.041577   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.079449   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:07.079522   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:07.131796   20196 fix.go:56] duration metric: took 13.600616251s for fixHost
	I1204 15:33:07.131819   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.131964   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.132056   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.132147   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.132258   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.132400   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:07.132541   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:07.132548   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:07.198066   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355187.085615924
	
	I1204 15:33:07.198080   20196 fix.go:216] guest clock: 1733355187.085615924
	I1204 15:33:07.198085   20196 fix.go:229] Guest: 2024-12-04 15:33:07.085615924 -0800 PST Remote: 2024-12-04 15:33:07.131808 -0800 PST m=+14.052161483 (delta=-46.192076ms)
	I1204 15:33:07.198107   20196 fix.go:200] guest clock delta is within tolerance: -46.192076ms
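The fix.go lines above compare the guest's date +%s.%N output against host time and skip a resync because -46ms sits inside the tolerance window. A sketch of that comparison; the one-second bound below is an assumed stand-in, since the log does not state minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance reports whether |guest - host| <= tol.
func clockWithinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	guest := time.Unix(1733355187, 85615924) // guest clock from the log
	host := guest.Add(46 * time.Millisecond) // reproduces the ~-46ms delta
	fmt.Println(clockWithinTolerance(guest, host, time.Second)) // true
}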
	I1204 15:33:07.198113   20196 start.go:83] releasing machines lock for "ha-098000", held for 13.666979222s
	I1204 15:33:07.198132   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198272   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:07.198375   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198673   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198785   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198878   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:07.198921   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.198947   20196 ssh_runner.go:195] Run: cat /version.json
	I1204 15:33:07.198968   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.199026   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.199093   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.199123   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.199209   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.199228   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.199298   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.199315   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.199396   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.233868   20196 ssh_runner.go:195] Run: systemctl --version
	I1204 15:33:07.278985   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 15:33:07.283423   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:07.283478   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:07.298510   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:07.298524   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:07.298651   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:07.315201   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:07.324137   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:07.332963   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:07.333027   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:07.341883   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:07.350757   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:07.359678   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:07.368612   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:07.377607   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:07.386447   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:07.395124   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:33:07.404070   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:07.412097   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:07.412157   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:07.421208   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
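The status-255 sysctl above is expected on a fresh boot: the bridge-nf-call-iptables knob only appears once br_netfilter is loaded, so the failed probe triggers the modprobe and the ip_forward write. A guest-side sketch of that probe-then-fallback sequence (not minikube's code):

package sketch

import (
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter when its sysctl is absent,
// then turns on IPv4 forwarding, mirroring the three logged commands.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}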
	I1204 15:33:07.429418   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:07.524346   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:07.542570   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:07.542668   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:07.559288   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:07.569950   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:07.583434   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:07.593916   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:07.603881   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:07.624337   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:07.634820   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:07.649640   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:07.652619   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:07.659817   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:07.673288   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:07.772876   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:07.878665   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:07.878744   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:07.892585   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:07.986161   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:33:10.248338   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.262094537s)
	I1204 15:33:10.248412   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:33:10.259004   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:33:10.272350   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:10.282710   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:33:10.373201   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:33:10.481588   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:10.590503   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:33:10.604294   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:10.614461   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:10.704083   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:33:10.769517   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:33:10.769615   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:33:10.774192   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:33:10.774266   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:33:10.777449   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:33:10.800815   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 15:33:10.800899   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:10.817205   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:10.856841   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:33:10.856890   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:10.857354   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:33:10.862069   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:10.871775   20196 kubeadm.go:883] updating cluster {Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 15:33:10.871875   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:10.871949   20196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:33:10.885784   20196 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1204 15:33:10.885796   20196 docker.go:619] Images already preloaded, skipping extraction
	I1204 15:33:10.885882   20196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:33:10.904423   20196 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1204 15:33:10.904444   20196 cache_images.go:84] Images are preloaded, skipping loading
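The two identical image listings above let cache_images.go conclude nothing needs extracting from the preload tarball. A sketch of that set-difference check with illustrative data; minikube's real comparison lives in its cache code, not here:

package main

import "fmt"

// missingImages returns the entries of want that are absent from have.
func missingImages(have, want []string) []string {
	got := make(map[string]bool, len(have))
	for _, img := range have {
		got[img] = true
	}
	var missing []string
	for _, img := range want {
		if !got[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	have := []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.15-0"}
	want := []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/coredns/coredns:v1.11.3"}
	fmt.Println(missingImages(have, want)) // only coredns would need loading
}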
	I1204 15:33:10.904450   20196 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1204 15:33:10.904531   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:33:10.904612   20196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 15:33:10.937949   20196 cni.go:84] Creating CNI manager for ""
	I1204 15:33:10.937963   20196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 15:33:10.937974   20196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 15:33:10.938009   20196 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-098000 NodeName:ha-098000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 15:33:10.938085   20196 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-098000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
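The three YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged on the node as /var/tmp/minikube/kubeadm.yaml.new and compared against the live file before anything is rerun, which is why a diff appears later in this log. The staging idiom, spelled out as a sketch (minikube itself decides from the diff whether reconfiguration is needed):

    # Promote the staged kubeadm config only if it actually changed
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      || sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml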
	
	I1204 15:33:10.938101   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:33:10.938174   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:33:10.950599   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:33:10.950678   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
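This static pod pins the virtual IP 192.169.0.254 on eth0, and because lb_enable/lb_port are set it also load-balances port 8443 across the control-plane members; the modprobe of the ip_vs modules just above is what enables that. A quick probe against the VIP (illustrative; /version is served to unauthenticated clients on a default cluster):

    # Expect a JSON version payload once kube-vip holds the leader lease
    curl -k https://192.169.0.254:8443/version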
	I1204 15:33:10.950747   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:33:10.959008   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:33:10.959066   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 15:33:10.966355   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1204 15:33:10.979785   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:33:10.993124   20196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1204 15:33:11.007280   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:33:11.020699   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:33:11.023569   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:11.032639   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:11.133629   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:11.148832   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.5
	I1204 15:33:11.148845   20196 certs.go:194] generating shared ca certs ...
	I1204 15:33:11.148855   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.149029   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:33:11.149085   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:33:11.149095   20196 certs.go:256] generating profile certs ...
	I1204 15:33:11.149184   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:33:11.149204   20196 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330
	I1204 15:33:11.149219   20196 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I1204 15:33:11.369000   20196 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 ...
	I1204 15:33:11.369023   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330: {Name:mkee72feeeccd665b141717d3a28fdfb2c7bde31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.369371   20196 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330 ...
	I1204 15:33:11.369381   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330: {Name:mk73951855cf52179c105169e788f46cc4d39a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.369660   20196 certs.go:381] copying /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt
	I1204 15:33:11.369853   20196 certs.go:385] copying /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key
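The apiserver certificate minted here has to carry every address a client might dial, so its SAN list covers the in-cluster service IP (10.96.0.1), localhost, all three control-plane node IPs, and the kube-vip VIP (192.169.0.254). Inspecting the SANs of the written cert (sketch):

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt \
      | grep -A1 'Subject Alternative Name'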
	I1204 15:33:11.370068   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:33:11.370078   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:33:11.370100   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:33:11.370120   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:33:11.370139   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:33:11.370157   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:33:11.370176   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:33:11.370196   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:33:11.370213   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:33:11.370295   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:33:11.370331   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:33:11.370340   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:33:11.370387   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:33:11.370418   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:33:11.370453   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:33:11.370519   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:11.370552   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.370573   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.370591   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.371058   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:33:11.399000   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:33:11.441701   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:33:11.476788   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:33:11.508692   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:33:11.528963   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:33:11.548308   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:33:11.567414   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:33:11.586589   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:33:11.605437   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:33:11.624356   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:33:11.643314   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 15:33:11.656890   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:33:11.661063   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:33:11.670050   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.673329   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.673378   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.677431   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 15:33:11.686327   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:33:11.695205   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.698569   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.698616   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.702683   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:33:11.711573   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:33:11.720441   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.723730   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.723772   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.727893   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
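The ls/hash/ln triples above implement OpenSSL's hashed-directory CA lookup: a certificate in /etc/ssl/certs is located through a symlink named after its subject hash, so each PEM gets a <hash>.0 link (b5213941.0 for minikubeCA here). The same wiring done by hand on the node (sketch):

    # Compute the subject hash and create the lookup symlink
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"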
	I1204 15:33:11.736772   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:33:11.740128   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:33:11.744800   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:33:11.749129   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:33:11.753890   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:33:11.758287   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:33:11.762608   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
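Each -checkend 86400 invocation asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit here would trigger regeneration before the restart proceeds. Standalone (sketch):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'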
	I1204 15:33:11.766918   20196 kubeadm.go:392] StartCluster: {Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:33:11.767041   20196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:33:11.779240   20196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 15:33:11.787479   20196 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 15:33:11.787491   20196 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 15:33:11.787539   20196 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 15:33:11.796840   20196 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:33:11.797140   20196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-098000" does not appear in /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.797223   20196 kubeconfig.go:62] /Users/jenkins/minikube-integration/20045-17258/kubeconfig needs updating (will repair): [kubeconfig missing "ha-098000" cluster setting kubeconfig missing "ha-098000" context setting]
	I1204 15:33:11.797420   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/kubeconfig: {Name:mk988c2800ea459104871ce2a5d515d71b51f8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.797819   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.798024   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:33:11.798341   20196 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 15:33:11.798533   20196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 15:33:11.806274   20196 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1204 15:33:11.806292   20196 kubeadm.go:597] duration metric: took 18.792967ms to restartPrimaryControlPlane
	I1204 15:33:11.806299   20196 kubeadm.go:394] duration metric: took 39.384435ms to StartCluster
	I1204 15:33:11.806313   20196 settings.go:142] acquiring lock: {Name:mk99ad63e4feda725ee10448138b299c26bf8cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.806400   20196 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.806790   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/kubeconfig: {Name:mk988c2800ea459104871ce2a5d515d71b51f8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.807009   20196 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:11.807022   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:33:11.807035   20196 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 15:33:11.807145   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.850133   20196 out.go:177] * Enabled addons: 
	I1204 15:33:11.871157   20196 addons.go:510] duration metric: took 64.116535ms for enable addons: enabled=[]
	I1204 15:33:11.871244   20196 start.go:246] waiting for cluster config update ...
	I1204 15:33:11.871256   20196 start.go:255] writing updated cluster config ...
	I1204 15:33:11.894284   20196 out.go:201] 
	I1204 15:33:11.915277   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.915378   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:11.939339   20196 out.go:177] * Starting "ha-098000-m02" control-plane node in "ha-098000" cluster
	I1204 15:33:11.981186   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:11.981222   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:11.981421   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:33:11.981442   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:11.981558   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:11.982398   20196 start.go:360] acquireMachinesLock for ha-098000-m02: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:11.982475   20196 start.go:364] duration metric: took 58.776µs to acquireMachinesLock for "ha-098000-m02"
	I1204 15:33:11.982495   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:33:11.982501   20196 fix.go:54] fixHost starting: m02
	I1204 15:33:11.982818   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:11.982845   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:11.994288   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58624
	I1204 15:33:11.994640   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:11.995007   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:11.995021   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:11.995253   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:11.995373   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:11.995490   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetState
	I1204 15:33:11.995578   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:11.995648   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20139
	I1204 15:33:11.996810   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 20139 missing from process table
	I1204 15:33:11.996835   20196 fix.go:112] recreateIfNeeded on ha-098000-m02: state=Stopped err=<nil>
	I1204 15:33:11.996847   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	W1204 15:33:11.996942   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:33:12.039213   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m02" ...
	I1204 15:33:12.060086   20196 main.go:141] libmachine: (ha-098000-m02) Calling .Start
	I1204 15:33:12.060346   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:12.060380   20196 main.go:141] libmachine: (ha-098000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid
	I1204 15:33:12.061608   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 20139 missing from process table
	I1204 15:33:12.061617   20196 main.go:141] libmachine: (ha-098000-m02) DBG | pid 20139 is in state "Stopped"
	I1204 15:33:12.061626   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid...
	I1204 15:33:12.061806   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Using UUID 2486faac-afab-449a-8055-5ee234f7d16f
	I1204 15:33:12.086653   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Generated MAC b2:39:f5:23:0b:32
	I1204 15:33:12.086676   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:33:12.086820   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2486faac-afab-449a-8055-5ee234f7d16f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004233b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:12.086851   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2486faac-afab-449a-8055-5ee234f7d16f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004233b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:12.086887   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2486faac-afab-449a-8055-5ee234f7d16f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/ha-098000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:33:12.086920   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2486faac-afab-449a-8055-5ee234f7d16f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/ha-098000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:33:12.086929   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:33:12.088450   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Pid is 20220
	I1204 15:33:12.088937   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Attempt 0
	I1204 15:33:12.088953   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:12.089027   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20220
	I1204 15:33:12.090875   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Searching for b2:39:f5:23:0b:32 in /var/db/dhcpd_leases ...
	I1204 15:33:12.090963   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:33:12.090982   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:33:12.091003   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:33:12.091026   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f47a}
	I1204 15:33:12.091037   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Found match: b2:39:f5:23:0b:32
	I1204 15:33:12.091047   20196 main.go:141] libmachine: (ha-098000-m02) DBG | IP: 192.169.0.6
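hyperkit exposes no guest-IP API, so the driver recovers the VM's address by matching the generated MAC against macOS's DHCP lease database. Note the lease entry records the MAC with leading zeros stripped (ID 1,b2:39:f5:23:b:32 versus b2:39:f5:23:0b:32), so a naive grep for the full MAC can miss; the driver normalizes octets before comparing. A rough manual equivalent on the host (sketch):

    # -B3 pulls in the name/ip_address fields that precede hw_address
    grep -i -B3 'b2:39:f5:23' /var/db/dhcpd_leases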
	I1204 15:33:12.091078   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetConfigRaw
	I1204 15:33:12.091745   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:12.091957   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:12.092493   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:33:12.092503   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:12.092649   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:12.092776   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:12.092901   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:12.093004   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:12.093096   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:12.093267   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:12.093463   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:12.093473   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:33:12.099465   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:33:12.108663   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:33:12.109633   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:12.109661   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:12.109674   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:12.109689   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:12.508437   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:33:12.508452   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:33:12.623247   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:12.623267   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:12.623283   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:12.623289   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:12.624086   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:33:12.624095   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:18.362951   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 15:33:18.362990   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 15:33:18.362997   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 15:33:18.387781   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 15:33:23.149238   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:23.149254   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.149403   20196 buildroot.go:166] provisioning hostname "ha-098000-m02"
	I1204 15:33:23.149415   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.149509   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.149612   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.149697   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.149796   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.149882   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.150012   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.150165   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.150173   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m02 && echo "ha-098000-m02" | sudo tee /etc/hostname
	I1204 15:33:23.207677   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m02
	
	I1204 15:33:23.207693   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.207831   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.207942   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.208053   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.208156   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.208340   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.208503   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.208515   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
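The provisioning script above is idempotent: it leaves /etc/hosts alone when a line already ends in the hostname, and otherwise rewrites the existing 127.0.1.1 entry rather than appending a duplicate. Confirming the result on the node (sketch):

    minikube -p ha-098000 ssh -n m02 -- 'hostname; grep 127.0.1.1 /etc/hosts'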
	I1204 15:33:23.265398   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:33:23.265414   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:23.265426   20196 buildroot.go:174] setting up certificates
	I1204 15:33:23.265434   20196 provision.go:84] configureAuth start
	I1204 15:33:23.265443   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.265604   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:23.265696   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.265792   20196 provision.go:143] copyHostCerts
	I1204 15:33:23.265821   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:23.265868   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:23.265874   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:23.266044   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:23.266308   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:23.266347   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:23.266352   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:23.266606   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:23.266780   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:23.266810   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:23.266815   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:23.266891   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:23.267067   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m02 san=[127.0.0.1 192.169.0.6 ha-098000-m02 localhost minikube]
	I1204 15:33:23.418588   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:23.418649   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:23.418663   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.418794   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.418895   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.418994   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.419094   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:23.449777   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:23.449845   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:23.469736   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:23.469808   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:33:23.489512   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:23.489573   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:33:23.509353   20196 provision.go:87] duration metric: took 243.902721ms to configureAuth
	I1204 15:33:23.509367   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:23.509536   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:23.509550   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:23.509693   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.509787   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.509886   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.509981   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.510059   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.510190   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.510321   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.510328   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:23.557917   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:23.557929   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:23.558018   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:23.558034   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.558154   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.558255   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.558337   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.558428   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.558600   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.558722   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.558764   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:23.619577   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:23.619599   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.619741   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.619853   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.619941   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.620042   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.620196   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.620336   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.620348   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:25.265062   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:33:25.265078   20196 machine.go:96] duration metric: took 13.172205227s to provisionDockerMachine
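The SSH command logged above installs docker.service.new only when it differs from (or there is no) installed unit, then reloads and restarts in the same breath. The same install-if-changed idiom as a generic sketch (file paths are placeholders):

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	# diff exits 0 when identical; any other status (differs, or cur missing) triggers the swap
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	}
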
	I1204 15:33:25.265092   20196 start.go:293] postStartSetup for "ha-098000-m02" (driver="hyperkit")
	I1204 15:33:25.265099   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:25.265111   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.265311   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:25.265332   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.265441   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.265529   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.265633   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.265739   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.304266   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:25.311180   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:25.311193   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:25.311283   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:25.311424   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:25.311431   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:25.311607   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:25.324859   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:25.357942   20196 start.go:296] duration metric: took 92.839826ms for postStartSetup
	I1204 15:33:25.357966   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.358160   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:25.358173   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.358261   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.358352   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.358436   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.358521   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.389685   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:25.389754   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:25.422337   20196 fix.go:56] duration metric: took 13.439453986s for fixHost
	I1204 15:33:25.422364   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.422533   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.422647   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.422735   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.422815   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.422958   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:25.423099   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:25.423107   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:25.472632   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355205.621764225
	
	I1204 15:33:25.472647   20196 fix.go:216] guest clock: 1733355205.621764225
	I1204 15:33:25.472652   20196 fix.go:229] Guest: 2024-12-04 15:33:25.621764225 -0800 PST Remote: 2024-12-04 15:33:25.422353 -0800 PST m=+32.342189685 (delta=199.411225ms)
	I1204 15:33:25.472663   20196 fix.go:200] guest clock delta is within tolerance: 199.411225ms
	I1204 15:33:25.472667   20196 start.go:83] releasing machines lock for "ha-098000-m02", held for 13.489803052s
	I1204 15:33:25.472697   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.472837   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:25.496277   20196 out.go:177] * Found network options:
	I1204 15:33:25.537194   20196 out.go:177]   - NO_PROXY=192.169.0.5
	W1204 15:33:25.558335   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:25.558422   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559432   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559728   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559899   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:25.559950   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	W1204 15:33:25.560026   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:25.560173   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:33:25.560212   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.560218   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.560413   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.560435   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.560588   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.560653   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.560755   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.560803   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.560929   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	W1204 15:33:25.589676   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:25.589750   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:25.635633   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:25.635654   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:25.635765   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:25.651707   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:25.660095   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:25.668588   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:25.668650   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:25.676830   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:25.685079   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:25.693509   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:25.701733   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:25.710137   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:25.718450   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:25.726929   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
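All of the sed edits above rewrite /etc/containerd/config.toml in place; the one that actually decides the cgroup driver is the SystemdCgroup toggle. Condensed sketch, assuming the stock config layout:

	# false = containerd's runc shim uses cgroupfs, matching the cgroup driver chosen above
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl restart containerd
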
	I1204 15:33:25.735114   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:25.742569   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:25.742622   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:25.751585   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
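The failed sysctl is expected: /proc/sys/net/bridge/* only exists once br_netfilter is loaded, which is why the modprobe follows it. The enablement sequence in one place:

	sudo modprobe br_netfilter                          # exposes /proc/sys/net/bridge/*
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
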
	I1204 15:33:25.759751   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:25.851537   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:25.870178   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:25.870261   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:25.886777   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:25.898631   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:25.915954   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:25.927090   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:25.937345   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:25.958314   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:25.968609   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:25.983636   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:25.986491   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:25.993508   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:26.006712   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:26.100912   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:26.190828   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:26.190859   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
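The 130-byte daemon.json itself is not echoed into the log; a plausible shape for switching dockerd to cgroupfs (these contents are an assumption, not the bytes the test wrote) would be:

	sudo tee /etc/docker/daemon.json >/dev/null <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl restart docker
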
	I1204 15:33:26.204976   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:26.305524   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:33:28.666691   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.361082583s)
	I1204 15:33:28.666774   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:33:28.677849   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:33:28.691293   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:28.702315   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:33:28.804235   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:33:28.895456   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:29.008598   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:33:29.022244   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:29.033285   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:29.123647   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:33:29.194113   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:33:29.194213   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:33:29.198266   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:33:29.198329   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:33:29.201217   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:33:29.226480   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 15:33:29.226574   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:29.245410   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:29.286251   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:33:29.327924   20196 out.go:177]   - env NO_PROXY=192.169.0.5
	I1204 15:33:29.348859   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:29.349296   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:33:29.353761   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
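The hosts rewrite above is deliberately copy-on-write: the filtered file is assembled in /tmp first, then cp'd over /etc/hosts under sudo, so the redirect never truncates the live file mid-read and rerunning it is idempotent. Generalized sketch (IP and HOST are placeholders):

	IP=192.169.0.1 HOST=host.minikube.internal
	{ grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts && rm "/tmp/h.$$"
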
	I1204 15:33:29.363356   20196 mustload.go:65] Loading cluster: ha-098000
	I1204 15:33:29.363524   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:29.363748   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:29.363768   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:29.374807   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58646
	I1204 15:33:29.375120   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:29.375473   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:29.375491   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:29.375697   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:29.375799   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:33:29.375885   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:29.375946   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:33:29.377121   20196 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:33:29.377369   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:29.377393   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:29.388419   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58648
	I1204 15:33:29.388721   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:29.389015   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:29.389049   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:29.389281   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:29.389378   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:29.389495   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.6
	I1204 15:33:29.389501   20196 certs.go:194] generating shared ca certs ...
	I1204 15:33:29.389513   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:29.389656   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:33:29.389710   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:33:29.389719   20196 certs.go:256] generating profile certs ...
	I1204 15:33:29.389811   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:33:29.389878   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.3ecf7e1a
	I1204 15:33:29.389931   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:33:29.389938   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:33:29.389964   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:33:29.389985   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:33:29.390009   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:33:29.390029   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:33:29.390048   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:33:29.390067   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:33:29.390086   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:33:29.390163   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:33:29.390207   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:33:29.390215   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:33:29.390250   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:33:29.390285   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:33:29.390316   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:33:29.390382   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:29.390418   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.390439   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.390458   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.390483   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:29.390568   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:29.390658   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:29.390751   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:29.390833   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:29.422140   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 15:33:29.425696   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 15:33:29.434269   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 15:33:29.437377   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 15:33:29.446042   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 15:33:29.449183   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 15:33:29.457490   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 15:33:29.460647   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1204 15:33:29.469352   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 15:33:29.472755   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 15:33:29.481093   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 15:33:29.484099   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1204 15:33:29.492651   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:33:29.513068   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:33:29.533396   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:33:29.553633   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:33:29.573360   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:33:29.592833   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:33:29.612325   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:33:29.631705   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:33:29.651772   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:33:29.671647   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:33:29.691028   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:33:29.710680   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 15:33:29.724088   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 15:33:29.738048   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 15:33:29.751781   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1204 15:33:29.765280   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 15:33:29.779127   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1204 15:33:29.792641   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 15:33:29.806335   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:33:29.810643   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:33:29.819095   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.822486   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.822534   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.826729   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:33:29.835308   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:33:29.843890   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.847451   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.847503   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.851708   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
	I1204 15:33:29.859922   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:33:29.868147   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.871612   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.871654   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.875808   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
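The hash/symlink pairs above are how OpenSSL expects CAs to be laid out: it looks certificates up in /etc/ssl/certs by subject-hash file name. The idiom in isolation (cert path is a placeholder):

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941, as in the symlink above
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
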
	I1204 15:33:29.884074   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:33:29.887539   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:33:29.891899   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:33:29.896170   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:33:29.900557   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:33:29.904814   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:33:29.909235   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
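Each -checkend 86400 run exits non-zero if the certificate expires within the next 86400 seconds, so only certs within a day of expiry need regenerating. Sketch:

	if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "certificate expires within 24h; regenerate it" >&2
	fi
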
	I1204 15:33:29.913504   20196 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1204 15:33:29.913564   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:33:29.913578   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:33:29.913625   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:33:29.926130   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:33:29.926164   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
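The manifest above is a static pod: as the scp a few lines below shows, deploying it is just dropping the YAML into the kubelet's manifest directory. Sketch of the same step done by hand:

	sudo cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml
	# the kubelet watches this directory and (re)creates the pod whenever the file changes
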
	I1204 15:33:29.926229   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:33:29.933952   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:33:29.934013   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 15:33:29.941532   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1204 15:33:29.955276   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:33:29.968570   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:33:29.982327   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:33:29.985248   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:29.994738   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:30.085095   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:30.100297   20196 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:30.100505   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:30.121980   20196 out.go:177] * Verifying Kubernetes components...
	I1204 15:33:30.163546   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:30.296003   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:30.317056   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:30.317267   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 15:33:30.317312   20196 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1204 15:33:30.317488   20196 node_ready.go:35] waiting up to 6m0s for node "ha-098000-m02" to be "Ready" ...
	I1204 15:33:30.317571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:30.317576   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:30.317583   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:30.317592   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.429719   20196 round_trippers.go:574] Response Status: 200 OK in 8111 milliseconds
	I1204 15:33:38.437420   20196 node_ready.go:49] node "ha-098000-m02" has status "Ready":"True"
	I1204 15:33:38.437441   20196 node_ready.go:38] duration metric: took 8.119707596s for node "ha-098000-m02" to be "Ready" ...
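The Ready poll above is the programmatic equivalent of a kubectl wait; from a workstation with this kubeconfig, the same gate (names from this run) would be:

	kubectl wait --for=condition=Ready node/ha-098000-m02 --timeout=6m
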
	I1204 15:33:38.437450   20196 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:33:38.437502   20196 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 15:33:38.437515   20196 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 15:33:38.437571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:38.437578   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.437593   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.437599   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.455661   20196 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
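The per-pod waits that follow walk the system-critical labels listed above one pod at a time; the same readiness gate can be expressed with label selectors, for example:

	for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
	         component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=6m
	done
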
	I1204 15:33:38.464148   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.464210   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:33:38.464215   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.464221   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.464224   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.470699   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:33:38.471292   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.471302   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.471308   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.471312   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.481534   20196 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 15:33:38.481959   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.481970   20196 pod_ready.go:82] duration metric: took 17.803771ms for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.481977   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.482020   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:33:38.482026   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.482032   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.482035   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.487605   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:38.488267   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.488322   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.488329   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.488343   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.490575   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:38.491180   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.491192   20196 pod_ready.go:82] duration metric: took 9.208421ms for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.491202   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.491280   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000
	I1204 15:33:38.491287   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.491293   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.491297   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.494530   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:38.495165   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.495173   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.495180   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.495184   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.499549   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:38.499961   20196 pod_ready.go:93] pod "etcd-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.499972   20196 pod_ready.go:82] duration metric: took 8.763238ms for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.499980   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.500023   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m02
	I1204 15:33:38.500028   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.500034   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.500039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.506409   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:33:38.506828   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:38.506837   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.506843   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.506846   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.511940   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:38.512316   20196 pod_ready.go:93] pod "etcd-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.512327   20196 pod_ready.go:82] duration metric: took 12.340986ms for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.512334   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.512373   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m03
	I1204 15:33:38.512378   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.512384   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.512389   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.516730   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:38.638087   20196 request.go:632] Waited for 120.794515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:38.638124   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:38.638130   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.638161   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.638169   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.640203   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:38.640614   20196 pod_ready.go:93] pod "etcd-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.640625   20196 pod_ready.go:82] duration metric: took 128.282ms for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.640638   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.838617   20196 request.go:632] Waited for 197.931176ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:33:38.838688   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:33:38.838697   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.838706   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.838712   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.840867   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:39.037679   20196 request.go:632] Waited for 196.178205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:39.037714   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:39.037719   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.037772   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.037777   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.042421   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:39.042726   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.042736   20196 pod_ready.go:82] duration metric: took 402.080499ms for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.042743   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.237786   20196 request.go:632] Waited for 195.001118ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:33:39.237820   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:33:39.237825   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.237830   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.237835   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.243495   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:39.437668   20196 request.go:632] Waited for 193.740455ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:39.437701   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:39.437706   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.437712   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.437719   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.440123   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:39.440472   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.440482   20196 pod_ready.go:82] duration metric: took 397.72282ms for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.440490   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.638172   20196 request.go:632] Waited for 197.630035ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:33:39.638227   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:33:39.638235   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.638277   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.638301   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.641465   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:39.837863   20196 request.go:632] Waited for 195.844278ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:39.837914   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:39.837923   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.838008   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.838017   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.841077   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:39.841414   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.841423   20196 pod_ready.go:82] duration metric: took 400.91619ms for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.841431   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.037805   20196 request.go:632] Waited for 196.32052ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:33:40.037839   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:33:40.037845   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.037851   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.037857   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.040255   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.238963   20196 request.go:632] Waited for 198.140778ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:40.239022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:40.239028   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.239040   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.239045   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.242092   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:40.242401   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:40.242411   20196 pod_ready.go:82] duration metric: took 400.963216ms for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.242419   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.438693   20196 request.go:632] Waited for 196.229899ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:33:40.438729   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:33:40.438735   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.438741   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.438745   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.441139   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.637709   20196 request.go:632] Waited for 196.13524ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:40.637752   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:40.637777   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.637783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.637787   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.640278   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.640704   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:40.640714   20196 pod_ready.go:82] duration metric: took 398.278068ms for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.640722   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.838825   20196 request.go:632] Waited for 198.055929ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:33:40.838901   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:33:40.838908   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.838927   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.838932   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.841541   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.037964   20196 request.go:632] Waited for 195.880635ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:41.038037   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:41.038043   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.038049   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.038054   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.041754   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.042231   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.042241   20196 pod_ready.go:82] duration metric: took 401.502224ms for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.042248   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.237873   20196 request.go:632] Waited for 195.582123ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:33:41.237946   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:33:41.237952   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.237957   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.237961   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.240730   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.438126   20196 request.go:632] Waited for 196.947205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:41.438157   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:41.438167   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.438207   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.438212   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.440777   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.441074   20196 pod_ready.go:93] pod "kube-proxy-8dv6r" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.441084   20196 pod_ready.go:82] duration metric: took 398.818652ms for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.441091   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.639164   20196 request.go:632] Waited for 198.003801ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:33:41.639309   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:33:41.639320   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.639331   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.639338   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.643045   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.838863   20196 request.go:632] Waited for 195.192063ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:41.838912   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:41.838924   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.838946   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.838954   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.842314   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.842750   20196 pod_ready.go:93] pod "kube-proxy-9strn" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.842763   20196 pod_ready.go:82] duration metric: took 401.652541ms for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.842771   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.039281   20196 request.go:632] Waited for 196.459472ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:33:42.039417   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:33:42.039428   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.039439   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.039447   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.042816   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:42.238811   20196 request.go:632] Waited for 195.378249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:33:42.238885   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:33:42.238891   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.238898   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.238903   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.240764   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:42.241072   20196 pod_ready.go:93] pod "kube-proxy-mz4q2" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:42.241084   20196 pod_ready.go:82] duration metric: took 398.294263ms for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.241092   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.438843   20196 request.go:632] Waited for 197.705446ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:33:42.438887   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:33:42.438898   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.438905   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.438908   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.440868   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:42.638818   20196 request.go:632] Waited for 197.361352ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:42.638884   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:42.638895   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.638906   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.638914   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.642158   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:42.642556   20196 pod_ready.go:93] pod "kube-proxy-rf4cp" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:42.642569   20196 pod_ready.go:82] duration metric: took 401.459636ms for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.642580   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.839526   20196 request.go:632] Waited for 196.890487ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:33:42.839701   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:33:42.839713   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.839724   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.839732   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.843198   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.037789   20196 request.go:632] Waited for 194.105591ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:43.037944   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:43.037961   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.037975   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.037982   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.041343   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.041920   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.041933   20196 pod_ready.go:82] duration metric: took 399.3347ms for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.041942   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.239892   20196 request.go:632] Waited for 197.874831ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:33:43.239961   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:33:43.239969   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.239983   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.239991   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.243085   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.438099   20196 request.go:632] Waited for 194.176391ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:43.438141   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:43.438168   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.438176   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.438185   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.440115   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:43.440578   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.440586   20196 pod_ready.go:82] duration metric: took 398.625667ms for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.440601   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.639811   20196 request.go:632] Waited for 199.133254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:33:43.639908   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:33:43.639919   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.639930   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.639940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.643164   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.839903   20196 request.go:632] Waited for 196.135821ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:43.839967   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:43.839976   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.839987   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.839994   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.843566   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.844161   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.844175   20196 pod_ready.go:82] duration metric: took 403.555453ms for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.844208   20196 pod_ready.go:39] duration metric: took 5.406590624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
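The loop above alternates throttled pod and node GETs and marks each pod done once its PodReady condition reports True. A minimal client-go sketch of that readiness check, assuming a configured clientset; the helper name and the ~200ms interval are illustrative, not minikube's actual pod_ready.go code:

	// waitPodReady polls a pod until its Ready condition is True, mirroring
	// the GET loop in the log above. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(200 * time.Millisecond) // roughly the client-side throttle spacing seen above
		}
		return fmt.Errorf("pod %s/%s never became Ready within %v", ns, name, timeout)
	}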
	I1204 15:33:43.844253   20196 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:33:43.844326   20196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:33:43.855983   20196 api_server.go:72] duration metric: took 13.755275558s to wait for apiserver process to appear ...
	I1204 15:33:43.855995   20196 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:33:43.856010   20196 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1204 15:33:43.860186   20196 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1204 15:33:43.860225   20196 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1204 15:33:43.860230   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.860243   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.860246   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.860683   20196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 15:33:43.860804   20196 api_server.go:141] control plane version: v1.31.2
	I1204 15:33:43.860815   20196 api_server.go:131] duration metric: took 4.815788ms to wait for apiserver health ...
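After the pgrep confirms a kube-apiserver process, the health gate is a plain HTTPS GET of /healthz expecting the literal body "ok", as logged above. A sketch of that probe; note that InsecureSkipVerify is only a shortcut for the example, whereas the real check trusts the cluster's CA:

	// checkHealthz probes an apiserver /healthz endpoint. Illustrative;
	// certificate verification is deliberately skipped here.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func checkHealthz(url string) error {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil // matches the "returned 200: ok" lines above
	}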
	I1204 15:33:43.860824   20196 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 15:33:44.038297   20196 request.go:632] Waited for 177.420142ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.038389   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.038399   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.038411   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.038421   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.044078   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:44.049007   20196 system_pods.go:59] 26 kube-system pods found
	I1204 15:33:44.049023   20196 system_pods.go:61] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:33:44.049029   20196 system_pods.go:61] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:33:44.049032   20196 system_pods.go:61] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:33:44.049034   20196 system_pods.go:61] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:33:44.049038   20196 system_pods.go:61] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:33:44.049041   20196 system_pods.go:61] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:33:44.049043   20196 system_pods.go:61] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:33:44.049046   20196 system_pods.go:61] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:33:44.049049   20196 system_pods.go:61] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:33:44.049051   20196 system_pods.go:61] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:33:44.049054   20196 system_pods.go:61] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:33:44.049056   20196 system_pods.go:61] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:33:44.049059   20196 system_pods.go:61] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:33:44.049069   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:33:44.049073   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:33:44.049075   20196 system_pods.go:61] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:33:44.049078   20196 system_pods.go:61] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:33:44.049080   20196 system_pods.go:61] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:33:44.049084   20196 system_pods.go:61] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:33:44.049087   20196 system_pods.go:61] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:33:44.049089   20196 system_pods.go:61] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:33:44.049092   20196 system_pods.go:61] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:33:44.049094   20196 system_pods.go:61] "kube-vip-ha-098000" [618bf60c-e57e-4c04-832e-71eebf18044d] Running
	I1204 15:33:44.049097   20196 system_pods.go:61] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:33:44.049099   20196 system_pods.go:61] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:33:44.049102   20196 system_pods.go:61] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running
	I1204 15:33:44.049106   20196 system_pods.go:74] duration metric: took 188.271977ms to wait for pod list to return data ...
	I1204 15:33:44.049112   20196 default_sa.go:34] waiting for default service account to be created ...
	I1204 15:33:44.239205   20196 request.go:632] Waited for 190.005694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:33:44.239263   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:33:44.239272   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.239283   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.239322   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.243527   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:44.243704   20196 default_sa.go:45] found service account: "default"
	I1204 15:33:44.243713   20196 default_sa.go:55] duration metric: took 194.591962ms for default service account to be created ...
	I1204 15:33:44.243719   20196 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 15:33:44.439115   20196 request.go:632] Waited for 195.322716ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.439234   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.439246   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.439258   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.439264   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.444755   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:44.449718   20196 system_pods.go:86] 26 kube-system pods found
	I1204 15:33:44.449733   20196 system_pods.go:89] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:33:44.449738   20196 system_pods.go:89] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:33:44.449741   20196 system_pods.go:89] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:33:44.449744   20196 system_pods.go:89] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:33:44.449748   20196 system_pods.go:89] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:33:44.449750   20196 system_pods.go:89] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:33:44.449753   20196 system_pods.go:89] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:33:44.449755   20196 system_pods.go:89] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:33:44.449758   20196 system_pods.go:89] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:33:44.449761   20196 system_pods.go:89] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:33:44.449765   20196 system_pods.go:89] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:33:44.449768   20196 system_pods.go:89] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:33:44.449771   20196 system_pods.go:89] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:33:44.449774   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:33:44.449777   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:33:44.449783   20196 system_pods.go:89] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:33:44.449786   20196 system_pods.go:89] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:33:44.449789   20196 system_pods.go:89] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:33:44.449793   20196 system_pods.go:89] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:33:44.449795   20196 system_pods.go:89] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:33:44.449798   20196 system_pods.go:89] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:33:44.449801   20196 system_pods.go:89] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:33:44.449804   20196 system_pods.go:89] "kube-vip-ha-098000" [618bf60c-e57e-4c04-832e-71eebf18044d] Running
	I1204 15:33:44.449806   20196 system_pods.go:89] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:33:44.449810   20196 system_pods.go:89] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:33:44.449813   20196 system_pods.go:89] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running
	I1204 15:33:44.449818   20196 system_pods.go:126] duration metric: took 206.089298ms to wait for k8s-apps to be running ...
	I1204 15:33:44.449823   20196 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 15:33:44.449890   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:33:44.461452   20196 system_svc.go:56] duration metric: took 11.623487ms WaitForService to wait for kubelet
	I1204 15:33:44.461466   20196 kubeadm.go:582] duration metric: took 14.360743481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:33:44.461484   20196 node_conditions.go:102] verifying NodePressure condition ...
	I1204 15:33:44.639462   20196 request.go:632] Waited for 177.925125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1204 15:33:44.639538   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1204 15:33:44.639548   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.639560   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.639568   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.643595   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:44.644812   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644828   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644839   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644849   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644853   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644856   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644858   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644861   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644864   20196 node_conditions.go:105] duration metric: took 183.370218ms to run NodePressure ...
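The NodePressure step above lists all nodes and reads their capacity fields (here 17734596Ki ephemeral storage and 2 CPUs per node). A small client-go sketch of that readout, assuming a configured clientset:

	// printNodeCapacity lists nodes and prints the two capacity fields the
	// log reports. Illustrative helper, not minikube's node_conditions.go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNodeCapacity(cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}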
	I1204 15:33:44.644872   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:33:44.644890   20196 start.go:255] writing updated cluster config ...
	I1204 15:33:44.665849   20196 out.go:201] 
	I1204 15:33:44.687912   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:44.688042   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.710522   20196 out.go:177] * Starting "ha-098000-m03" control-plane node in "ha-098000" cluster
	I1204 15:33:44.752466   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:44.752500   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:44.752679   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:33:44.752697   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:44.752830   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.753998   20196 start.go:360] acquireMachinesLock for ha-098000-m03: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:44.754068   20196 start.go:364] duration metric: took 52.377µs to acquireMachinesLock for "ha-098000-m03"
	I1204 15:33:44.754085   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:33:44.754091   20196 fix.go:54] fixHost starting: m03
	I1204 15:33:44.754406   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:44.754430   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:44.765918   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58653
	I1204 15:33:44.766304   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:44.766704   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:44.766719   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:44.766938   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:44.767056   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:44.767166   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetState
	I1204 15:33:44.767251   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.767322   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid from json: 19347
	I1204 15:33:44.768480   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid 19347 missing from process table
	I1204 15:33:44.768517   20196 fix.go:112] recreateIfNeeded on ha-098000-m03: state=Stopped err=<nil>
	I1204 15:33:44.768528   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	W1204 15:33:44.768610   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:33:44.789653   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m03" ...
	I1204 15:33:44.831751   20196 main.go:141] libmachine: (ha-098000-m03) Calling .Start
	I1204 15:33:44.832023   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.832066   20196 main.go:141] libmachine: (ha-098000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid
	I1204 15:33:44.834593   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid 19347 missing from process table
	I1204 15:33:44.834606   20196 main.go:141] libmachine: (ha-098000-m03) DBG | pid 19347 is in state "Stopped"
	I1204 15:33:44.834626   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid...
	I1204 15:33:44.835523   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Using UUID eac2e001-90c5-40d6-830d-b844e6baedeb
	I1204 15:33:44.861764   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Generated MAC 56:f8:e7:bc:e7:07
	I1204 15:33:44.861784   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:33:44.862005   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eac2e001-90c5-40d6-830d-b844e6baedeb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:44.862041   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eac2e001-90c5-40d6-830d-b844e6baedeb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:44.862100   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "eac2e001-90c5-40d6-830d-b844e6baedeb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/ha-098000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}

	I1204 15:33:44.862139   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U eac2e001-90c5-40d6-830d-b844e6baedeb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/ha-098000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:33:44.862604   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:33:44.864474   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Pid is 20231
	I1204 15:33:44.864862   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Attempt 0
	I1204 15:33:44.864878   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.864933   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid from json: 20231
	I1204 15:33:44.866074   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Searching for 56:f8:e7:bc:e7:07 in /var/db/dhcpd_leases ...
	I1204 15:33:44.866145   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:33:44.866158   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f4d1}
	I1204 15:33:44.866167   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:33:44.866177   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:33:44.866182   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f3e2}
	I1204 15:33:44.866187   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Found match: 56:f8:e7:bc:e7:07
	I1204 15:33:44.866193   20196 main.go:141] libmachine: (ha-098000-m03) DBG | IP: 192.169.0.7
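The driver resolves the restarted VM's IP by scanning macOS's /var/db/dhcpd_leases for the generated MAC, as shown above. A rough sketch of that lookup; the lease format assumed here (ip_address= before hw_address= within each entry) matches bootpd's text layout, and note the leases store single-digit hex octets (e7:7 vs e7:07), which the real driver normalizes before matching:

	// ipForMAC scans a dhcpd_leases-style file for a MAC and returns the
	// ip_address of that entry. Illustrative substring matching only.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func ipForMAC(leasesPath, mac string) (string, error) {
		f, err := os.Open(leasesPath)
		if err != nil {
			return "", err
		}
		defer f.Close()
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=") // remember the entry's IP
			}
			if strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac) {
				return ip, nil // ip_address precedes hw_address in each entry
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}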
	I1204 15:33:44.866266   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetConfigRaw
	I1204 15:33:44.866960   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:44.867187   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.867733   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:33:44.867748   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:44.867880   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:44.867991   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:44.868083   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:44.868175   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:44.868275   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:44.868449   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:44.868607   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:44.868615   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:33:44.875700   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:33:44.885221   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:33:44.886534   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:44.886590   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:44.886624   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:44.886641   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:45.310864   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:33:45.310888   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:33:45.426378   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:45.426408   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:45.426418   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:45.426427   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:45.427201   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:33:45.427213   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:51.200443   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:33:51.200513   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:33:51.200524   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:33:51.225933   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:33:55.935290   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:55.935305   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:55.935436   20196 buildroot.go:166] provisioning hostname "ha-098000-m03"
	I1204 15:33:55.935445   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:55.935551   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:55.935640   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:55.935732   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:55.935825   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:55.935912   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:55.936073   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:55.936205   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:55.936213   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m03 && echo "ha-098000-m03" | sudo tee /etc/hostname
	I1204 15:33:56.008649   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m03
	
	I1204 15:33:56.008663   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.008821   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.008915   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.009001   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.009093   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.009247   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.009386   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.009397   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:56.076925   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
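The two SSH commands above first set the kernel hostname and /etc/hostname, then make /etc/hosts consistent idempotently: rewrite an existing 127.0.1.1 line if present, append one otherwise, and do nothing on re-runs. A simplified, hypothetical sketch of how such a command string might be assembled (the real script uses grep/sed as logged):

	// hostnameCmd builds a shell one-liner that sets the hostname and adds a
	// 127.0.1.1 entry only if missing. Simplified variant for illustration.
	package main

	import "fmt"

	func hostnameCmd(h string) string {
		return fmt.Sprintf(
			`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname >/dev/null && `+
				`{ grep -q '127.0.1.1 %[1]s' /etc/hosts || echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; }`,
			h)
	}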
	I1204 15:33:56.076941   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:56.076950   20196 buildroot.go:174] setting up certificates
	I1204 15:33:56.076956   20196 provision.go:84] configureAuth start
	I1204 15:33:56.076962   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:56.077121   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:56.077219   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.077318   20196 provision.go:143] copyHostCerts
	I1204 15:33:56.077346   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:56.077405   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:56.077411   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:56.077538   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:56.077740   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:56.077775   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:56.077780   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:56.077851   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:56.078007   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:56.078036   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:56.078041   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:56.078135   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:56.078295   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m03 san=[127.0.0.1 192.169.0.7 ha-098000-m03 localhost minikube]
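The server cert generated above carries the SAN list [127.0.0.1 192.169.0.7 ha-098000-m03 localhost minikube], signed by the local CA. A compact crypto/x509 sketch of issuing such a certificate; the helper name, key size, and validity window are assumptions, not minikube's provisioner:

	// issueServerCert signs a server certificate with the SANs the log lists.
	// Illustrative; serial-number and lifetime choices are arbitrary.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, node string, ip net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins." + node}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{node, "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), ip},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}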
	I1204 15:33:56.184360   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:56.184421   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:56.184436   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.184584   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.184682   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.184788   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.184878   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:56.222358   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:56.222423   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:56.242527   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:56.242598   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:33:56.262411   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:56.262492   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 15:33:56.282604   20196 provision.go:87] duration metric: took 205.634097ms to configureAuth
	I1204 15:33:56.282619   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:56.282802   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:56.282816   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:56.282954   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.283056   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.283161   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.283267   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.283366   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.283498   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.283620   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.283628   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:56.345040   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:56.345053   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:56.345129   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:56.345143   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.345280   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.345367   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.345443   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.345524   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.345668   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.345805   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.345851   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:56.424345   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
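
For reference: the ExecStart= pair in the unit above is the standard systemd override pattern (an empty ExecStart= clears the inherited command, the second line supplies the replacement), and the flags it passes make dockerd listen on tcp://0.0.0.0:2376 with mutual TLS. A minimal sketch, in Go, of a client that such a daemon would accept; the certificate paths (the client-side ca.pem/cert.pem/key.pem that appear later in this log) and the use of the Docker Engine /_ping endpoint are assumptions for illustration, not part of the test itself:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // CA that signed the server cert; mirrors --tlscacert in the unit above.
        caPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        // Client cert/key: --tlsverify makes dockerd require a cert signed by the same CA.
        cert, err := tls.LoadX509KeyPair(
            "/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem",
            "/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem",
        )
        if err != nil {
            panic(err)
        }

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
        }}

        // /_ping is the Docker Engine API liveness endpoint; 192.169.0.7 is the node this unit was written to.
        resp, err := client.Get("https://192.169.0.7:2376/_ping")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }

With --tlsverify set, a connection without a client certificate fails during the TLS handshake, so this check exercises both sides of the mutual-TLS setup.
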
	
	I1204 15:33:56.424363   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.424517   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.424685   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.424787   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.424878   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.425031   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.425156   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.425173   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:58.122525   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:33:58.122539   20196 machine.go:96] duration metric: took 13.254423135s to provisionDockerMachine
	I1204 15:33:58.122547   20196 start.go:293] postStartSetup for "ha-098000-m03" (driver="hyperkit")
	I1204 15:33:58.122554   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:58.122566   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.122762   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:58.122783   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.122871   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.122946   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.123045   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.123137   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.161639   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:58.164739   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:58.164749   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:58.164831   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:58.164968   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:58.164974   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:58.165140   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:58.173027   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:58.192093   20196 start.go:296] duration metric: took 69.536473ms for postStartSetup
	I1204 15:33:58.192114   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.192306   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:58.192320   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.192414   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.192509   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.192600   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.192674   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.230841   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:58.230926   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:58.265220   20196 fix.go:56] duration metric: took 13.510737637s for fixHost
	I1204 15:33:58.265271   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.265414   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.265524   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.265620   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.265713   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.265865   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:58.266013   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:58.266021   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:58.330663   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355238.486070391
	
	I1204 15:33:58.330676   20196 fix.go:216] guest clock: 1733355238.486070391
	I1204 15:33:58.330682   20196 fix.go:229] Guest: 2024-12-04 15:33:58.486070391 -0800 PST Remote: 2024-12-04 15:33:58.265237 -0800 PST m=+65.184150423 (delta=220.833391ms)
	I1204 15:33:58.330692   20196 fix.go:200] guest clock delta is within tolerance: 220.833391ms
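
The fix.go lines above implement the guest-clock check: run `date +%s.%N` on the guest over SSH, parse the result, and compare it with the host clock captured around the call. A minimal sketch of that parse-and-compare step; the one-second tolerance is an assumption for illustration (the log only shows that a 220.833391ms delta passed):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // guestClockDelta parses the guest's `date +%s.%N` output and returns
    // how far the guest clock is from the supplied host timestamp.
    func guestClockDelta(raw string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(raw, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        // Values taken from the log lines above: guest 1733355238.486070391,
        // host 2024-12-04 15:33:58.265237 -0800 PST.
        delta, err := guestClockDelta("1733355238.486070391", time.Unix(1733355238, 265237000))
        if err != nil {
            panic(err)
        }
        tolerance := time.Second // assumed tolerance for this sketch
        fmt.Printf("delta=%v within=%v\n", delta, delta < tolerance && delta > -tolerance)
    }
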
	I1204 15:33:58.330696   20196 start.go:83] releasing machines lock for "ha-098000-m03", held for 13.576240131s
	I1204 15:33:58.330714   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.330854   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:58.352510   20196 out.go:177] * Found network options:
	I1204 15:33:58.380745   20196 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1204 15:33:58.401983   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:33:58.402013   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:58.402029   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402504   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402654   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402766   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:58.402819   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	W1204 15:33:58.402881   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:33:58.402902   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:58.402977   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.403000   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:33:58.403012   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.403174   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.403214   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.403349   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.403358   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.403564   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.403575   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.403741   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	W1204 15:33:58.437750   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:58.437828   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:58.485243   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:58.485257   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:58.485329   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:58.514237   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:58.528266   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:58.539804   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:58.539880   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:58.555961   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:58.566195   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:58.575257   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:58.584192   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:58.593620   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:58.603021   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:58.612370   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
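
The run of sed commands above rewrites /etc/containerd/config.toml in place; the containerd.go:146 line names the goal: force SystemdCgroup = false so containerd matches the cgroupfs driver chosen for docker. One of those edits sketched as a Go regexp rewrite (the file path comes from the log; the sample input is illustrative):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

    func main() {
        conf := []byte("[plugins]\n  SystemdCgroup = true\n")
        out := systemdCgroup.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
        fmt.Print(string(out)) // in the real flow this is written back to config.toml
    }
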
	I1204 15:33:58.621502   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:58.630294   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:58.630368   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:58.640300   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
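
The three commands above form one step: the sysctl probe fails because the br_netfilter module is not loaded yet, modprobe loads it, and the echo enables IPv4 forwarding. A minimal sketch of the same checks done directly against procfs (the paths are kernel-stable; the write requires root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // After `modprobe br_netfilter` this file exists; before, reading it
        // fails exactly like the sysctl call in the log.
        v, err := os.ReadFile("/proc/sys/net/bridge/bridge-nf-call-iptables")
        if err != nil {
            fmt.Println("br_netfilter not loaded:", err)
            return
        }
        fmt.Println("bridge-nf-call-iptables =", strings.TrimSpace(string(v)))

        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` (needs root).
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Println("enable ip_forward:", err)
        }
    }
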
	I1204 15:33:58.648626   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:58.742860   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:58.760057   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:58.760138   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:58.778296   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:58.793165   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:58.807402   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:58.818936   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:58.829930   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:58.849768   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:58.861249   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:58.876335   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:58.879342   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:58.887395   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:58.901271   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:59.012726   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:59.108627   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:59.108651   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:59.122518   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:59.224950   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:34:01.525196   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.300161441s)
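
docker.go:574 above writes a small /etc/docker/daemon.json to pin the cgroup driver to cgroupfs before restarting docker. The log does not show the 130-byte payload itself; a plausible sketch of generating such a file, where the exec-opts field is an assumption based on dockerd's documented native.cgroupdriver option:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // daemonConfig models a minimal /etc/docker/daemon.json; the fields are
    // assumptions for illustration, not the exact payload minikube generates.
    type daemonConfig struct {
        ExecOpts []string `json:"exec-opts"`
    }

    func main() {
        cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out)) // this content would be scp'd to /etc/docker/daemon.json
    }
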
	I1204 15:34:01.525275   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:34:01.537533   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:34:01.552928   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:01.564251   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:34:01.666308   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:34:01.762184   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:01.857672   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:34:01.871507   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:01.882955   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:01.972213   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:34:02.036955   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:34:02.037050   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:34:02.042796   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:34:02.042875   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:34:02.046431   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:34:02.073232   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 15:34:02.073324   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:02.089702   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:02.126985   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:34:02.168586   20196 out.go:177]   - env NO_PROXY=192.169.0.5
	I1204 15:34:02.190567   20196 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I1204 15:34:02.211577   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:34:02.211977   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:34:02.216597   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
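
The bash one-liner above is an idempotent /etc/hosts update: drop any line already ending in a tab plus "host.minikube.internal", append the fresh mapping, and copy the result back into place. The same transformation sketched in Go (it reads /etc/hosts and prints the rewritten content rather than installing it):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHost reproduces the grep -v / echo pipeline: remove any existing
    // line ending in "\t"+name, then append the fresh ip->name mapping.
    func ensureHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        return strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + ip + "\t" + name + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        fmt.Print(ensureHost(string(data), "192.169.0.1", "host.minikube.internal"))
    }
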
	I1204 15:34:02.226113   20196 mustload.go:65] Loading cluster: ha-098000
	I1204 15:34:02.226314   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:02.226550   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:02.226577   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:02.238043   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58675
	I1204 15:34:02.238357   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:02.238749   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:02.238766   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:02.238998   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:02.239102   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:34:02.239217   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:02.239287   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:34:02.240505   20196 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:34:02.240770   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:02.240796   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:02.252028   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58677
	I1204 15:34:02.252346   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:02.252700   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:02.252719   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:02.252937   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:02.253032   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:34:02.253139   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.7
	I1204 15:34:02.253146   20196 certs.go:194] generating shared ca certs ...
	I1204 15:34:02.253156   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:34:02.253308   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:34:02.253362   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:34:02.253371   20196 certs.go:256] generating profile certs ...
	I1204 15:34:02.253468   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:34:02.253856   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.d946d3b4
	I1204 15:34:02.253925   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:34:02.253938   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:34:02.253962   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:34:02.253983   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:34:02.254009   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:34:02.254028   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:34:02.254046   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:34:02.254065   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:34:02.254082   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:34:02.254159   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:34:02.254203   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:34:02.254211   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:34:02.254246   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:34:02.254278   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:34:02.254310   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:34:02.254374   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:34:02.254409   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.254429   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.254447   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.254475   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:34:02.254562   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:34:02.254640   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:34:02.254716   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:34:02.254794   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:34:02.285982   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 15:34:02.289453   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 15:34:02.298834   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 15:34:02.302369   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 15:34:02.315418   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 15:34:02.318593   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 15:34:02.327312   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 15:34:02.330564   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1204 15:34:02.339456   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 15:34:02.342515   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 15:34:02.351231   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 15:34:02.354286   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1204 15:34:02.363156   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:34:02.384838   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:34:02.405926   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:34:02.426535   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:34:02.446742   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:34:02.466560   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:34:02.486853   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:34:02.507184   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:34:02.528073   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:34:02.548964   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:34:02.569347   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:34:02.589426   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 15:34:02.603866   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 15:34:02.617657   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 15:34:02.631813   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1204 15:34:02.645494   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 15:34:02.659961   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1204 15:34:02.673777   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 15:34:02.687446   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:34:02.691739   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:34:02.700420   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.703973   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.704042   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.708497   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:34:02.717646   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:34:02.726542   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.729989   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.730041   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.734277   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
	I1204 15:34:02.742686   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:34:02.751027   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.754461   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.754515   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.758843   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
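
The three repeated blocks above install each CA under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is what `openssl x509 -hash -noout` computes. A minimal sketch of that shell-out-and-symlink step; the paths are assumptions and openssl must be on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert computes the OpenSSL subject hash of certPath and symlinks it
    // into certsDir as <hash>.0, mirroring the `ln -fs` commands above.
    func linkCert(certPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // ln -fs semantics: replace an existing link
        return link, os.Symlink(certPath, link)
    }

    func main() {
        link, err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        if err != nil {
            panic(err)
        }
        fmt.Println("linked", link)
    }
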
	I1204 15:34:02.767465   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:34:02.770903   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:34:02.776086   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:34:02.780679   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:34:02.785121   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:34:02.789654   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:34:02.794116   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 15:34:02.798756   20196 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1204 15:34:02.798834   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:34:02.798851   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:34:02.798902   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:34:02.811676   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:34:02.811716   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
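
kube-vip.go:137 above renders this static-pod manifest from the cluster's VIP (192.169.0.254) and API port (8443). A minimal text/template sketch of that kind of substitution, showing only the address/port env entries; the template body and struct fields are assumptions, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // vipParams carries the values substituted into the manifest; the field
    // names are assumptions for this sketch.
    type vipParams struct {
        Address string
        Port    string
    }

    const envTmpl = `    - name: address
          value: {{.Address}}
        - name: port
          value: "{{.Port}}"
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(envTmpl))
        if err := t.Execute(os.Stdout, vipParams{Address: "192.169.0.254", Port: "8443"}); err != nil {
            panic(err)
        }
    }
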
	I1204 15:34:02.811802   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:34:02.820056   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:34:02.820120   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 15:34:02.827634   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1204 15:34:02.840903   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:34:02.854283   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:34:02.867957   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:34:02.870915   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:34:02.880410   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:02.978715   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:34:02.992761   20196 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:34:02.992956   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:03.013320   20196 out.go:177] * Verifying Kubernetes components...
	I1204 15:34:03.055094   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:03.162591   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:34:03.175308   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:34:03.175517   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 15:34:03.175556   20196 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1204 15:34:03.175722   20196 node_ready.go:35] waiting up to 6m0s for node "ha-098000-m03" to be "Ready" ...
	I1204 15:34:03.175774   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:03.175780   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.175788   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.175793   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.177877   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.178182   20196 node_ready.go:49] node "ha-098000-m03" has status "Ready":"True"
	I1204 15:34:03.178191   20196 node_ready.go:38] duration metric: took 2.460684ms for node "ha-098000-m03" to be "Ready" ...
	I1204 15:34:03.178204   20196 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
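
Everything from here to the end of the section is one poll loop: roughly every 500ms, minikube GETs the coredns pod and its node until the pod's Ready condition turns True (pod_ready.go allows up to 6m0s). The same check sketched with client-go; the kubeconfig path, pod name, and cadence are taken from the log, the rest is an assumption for illustration:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/20045-17258/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-2z7lq", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("pod is Ready")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
        }
    }
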
	I1204 15:34:03.178249   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:03.178255   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.178261   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.178265   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.181589   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:03.187858   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:03.187917   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:03.187923   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.187928   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.187931   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.190071   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.190536   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:03.190544   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.190550   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.190553   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.192357   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:03.689890   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:03.689913   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.689960   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.689970   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.692722   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.693137   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:03.693145   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.693150   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.693154   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.694862   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:04.188595   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:04.188612   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.188618   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.188622   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.190926   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:04.191442   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:04.191451   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.191457   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.191460   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.193377   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:04.689410   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:04.689427   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.689433   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.689436   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.691829   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:04.692311   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:04.692320   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.692326   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.692329   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.694756   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.188051   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:05.188069   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.188075   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.188079   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.190537   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.191234   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:05.191244   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.191250   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.191254   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.193184   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:05.193754   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:05.689571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:05.689583   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.689589   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.689592   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.692119   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.693045   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:05.693054   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.693060   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.693070   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.695078   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:06.188182   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:06.188196   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.188203   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.188206   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.190803   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:06.191335   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:06.191343   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.191353   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.191358   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.193354   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:06.688125   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:06.688144   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.688150   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.688153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.698567   20196 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 15:34:06.699659   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:06.699669   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.699674   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.699678   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.702231   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:07.188129   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:07.188142   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.188149   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.188152   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.190314   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:07.190783   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:07.190793   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.190799   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.190803   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.192721   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.689429   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:07.689444   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.689450   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.689453   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.691383   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.691809   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:07.691816   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.691822   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.691827   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.693593   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.693894   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:08.189338   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:08.189353   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.189361   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.189365   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.191565   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:08.192110   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:08.192118   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.192124   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.192134   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.193879   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:08.689140   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:08.689155   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.689194   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.689198   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.691672   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:08.692190   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:08.692197   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.692203   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.692206   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.694257   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.189377   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:09.189389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.189396   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.189399   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.191765   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.192318   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:09.192326   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.192333   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.192337   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.194226   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:09.688422   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:09.688435   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.688441   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.688445   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.690918   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.691538   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:09.691546   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.691552   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.691556   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.693405   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:10.188400   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:10.188426   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.188438   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.188445   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.191226   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:10.191923   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:10.191930   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.191936   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.191940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.193682   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:10.194054   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:10.689544   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:10.689566   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.689601   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.689607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.692171   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:10.692830   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:10.692842   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.692848   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.692852   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.694354   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:11.188970   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:11.188983   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.188989   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.188992   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.193348   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:11.193835   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:11.193844   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.193850   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.193854   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.195899   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:11.688737   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:11.688752   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.688758   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.688761   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.691007   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:11.691483   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:11.691491   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.691496   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.691500   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.693198   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:12.188889   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:12.188972   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.188986   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.188999   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.192039   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:12.192581   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:12.192589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.192595   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.192598   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.194300   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:12.194673   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:12.688761   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:12.688869   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.688880   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.688888   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.691475   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:12.692022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:12.692029   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.692035   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.692039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.693737   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.190399   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:13.190424   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.190436   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.190441   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.193795   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:13.194709   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:13.194717   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.194722   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.194725   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.196228   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.688349   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:13.688361   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.688367   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.688370   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.690278   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.690775   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:13.690783   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.690788   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.690792   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.692350   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.189443   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:14.189461   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.189470   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.189474   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.191713   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:14.192328   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:14.192336   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.192341   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.192345   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.194132   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.689369   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:14.689471   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.689487   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.689522   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.693058   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:14.693755   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:14.693762   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.693768   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.693771   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.695478   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.695986   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:15.189753   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:15.189777   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.189833   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.189842   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.193300   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:15.193825   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:15.193835   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.193842   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.193848   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.195490   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:15.688564   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:15.688589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.688600   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.688607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.691559   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:15.692137   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:15.692145   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.692152   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.692156   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.693792   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:16.188974   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:16.188991   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.188999   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.189003   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.191876   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:16.192266   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:16.192273   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.192279   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.192283   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.193909   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:16.689589   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:16.689601   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.689607   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.689609   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.691735   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:16.692340   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:16.692348   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.692354   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.692364   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.694139   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:17.188693   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:17.188719   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.188730   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.188737   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.192306   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:17.192880   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:17.192888   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.192893   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.192896   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.194607   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:17.194930   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:17.689803   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:17.689822   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.689833   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.689840   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.692900   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:17.693582   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:17.693591   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.693596   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.693600   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.695568   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:18.189872   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:18.189891   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.189903   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.189909   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.193143   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:18.193659   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:18.193669   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.193677   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.193682   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.195539   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:18.689089   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:18.689110   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.689121   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.689128   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.692465   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:18.693092   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:18.693099   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.693105   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.693109   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.694811   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.188836   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:19.188866   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.188885   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.188893   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.191083   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:19.191481   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:19.191489   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.191494   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.191498   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.193210   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.688920   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:19.689019   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.689034   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.689040   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.692204   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:19.692887   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:19.692895   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.692901   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.692905   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.694482   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.694834   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
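
Note that the second GET in each iteration targets /nodes/ha-098000 rather than the pod: the waiter appears to re-check that the node hosting coredns is itself still Ready before trusting the pod's condition, so a node flap would surface here rather than as a stale pod status.
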
	I1204 15:34:20.189463   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:20.189482   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.189495   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.189507   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.192820   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:20.193489   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:20.193497   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.193503   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.193506   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.195170   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:20.689312   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:20.689335   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.689345   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.689353   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.692898   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:20.693406   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:20.693413   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.693419   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.693435   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.695237   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:21.189479   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:21.189499   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.189511   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.189519   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.192490   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:21.193119   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:21.193127   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.193132   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.193136   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.194670   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:21.689574   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:21.689589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.689595   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.689598   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.691684   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:21.692133   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:21.692140   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.692145   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.692156   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.694020   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:22.189311   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:22.189327   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.189334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.189337   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.191942   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:22.192424   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:22.192432   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.192438   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.192441   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.194080   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:22.194500   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:22.689269   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:22.689284   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.689293   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.689297   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.691724   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:22.692389   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:22.692397   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.692404   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.692407   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.694417   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:23.188903   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:23.188937   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.188944   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.188948   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.191281   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:23.191769   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:23.191776   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.191783   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.191786   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.193597   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:23.689658   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:23.689673   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.689682   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.689688   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.692154   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:23.692597   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:23.692605   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.692611   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.692614   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.694442   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:24.190414   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:24.190439   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.190448   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.190453   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.193694   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:24.194336   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:24.194343   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.194349   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.194352   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.196204   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:24.196507   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:24.689283   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:24.689324   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.689334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.689339   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.691786   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:24.692252   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:24.692260   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.692265   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.692269   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.694045   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:25.189972   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:25.189988   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.189995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.189997   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.192150   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:25.192590   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:25.192598   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.192604   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.192607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.194554   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:25.689840   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:25.689893   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.689902   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.689908   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.692432   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:25.693530   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:25.693539   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.693545   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.693556   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.695085   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.188685   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:26.188774   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.188787   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.188792   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.191478   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:26.191981   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:26.191990   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.191995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.191998   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.193972   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.689955   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:26.690060   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.690076   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.690084   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.693025   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:26.693583   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:26.693591   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.693596   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.693601   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.695193   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.695569   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:27.190057   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:27.190079   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.190096   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.190102   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.193105   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:27.193849   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:27.193860   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.193868   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.193873   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.195538   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:27.688758   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:27.688772   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.688779   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.688783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.694666   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:27.695270   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:27.695278   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.695283   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.695288   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.696913   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:28.188770   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:28.188819   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.188832   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.188840   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.191808   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:28.192403   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:28.192411   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.192416   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.192420   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.194136   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:28.689405   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:28.689487   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.689503   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.689511   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.694694   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:28.695230   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:28.695237   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.695243   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.695246   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.697820   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:28.698133   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:29.190106   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:29.190125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.190138   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.190143   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.193071   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:29.193687   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:29.193698   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.193706   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.193711   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.195444   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:29.689830   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:29.689849   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.689862   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.689867   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.692977   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:29.693745   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:29.693753   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.693759   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.693762   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.695525   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:30.190945   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:30.190965   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.190976   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.190988   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.195195   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:30.195850   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:30.195859   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.195865   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.195869   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.197592   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:30.689476   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:30.689500   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.689510   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.689516   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.692808   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:30.693458   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:30.693466   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.693471   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.693474   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.695140   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:31.189274   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:31.189389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.189404   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.189413   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.192545   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:31.193168   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:31.193179   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.193186   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.193193   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.194805   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:31.195157   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:31.690066   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:31.690125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.690139   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.690147   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.693489   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:31.694073   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:31.694084   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.694093   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.694098   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.695789   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:32.190294   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:32.190315   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.190326   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.190333   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.193258   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:32.193839   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:32.193846   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.193852   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.193856   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.195470   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:32.689113   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:32.689137   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.689148   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.689153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.692269   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:32.692828   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:32.692836   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.692842   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.692845   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.694429   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.188950   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:33.188969   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.188980   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.188987   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.191891   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:33.192381   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:33.192389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.192395   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.192400   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.194337   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.690112   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:33.690134   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.690145   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.690153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.693581   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:33.694215   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:33.694223   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.694229   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.694232   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.696177   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.696454   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:34.189881   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:34.189900   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.189912   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.189918   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.193287   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:34.193886   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:34.193897   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.193909   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.193915   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.195881   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:34.689892   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:34.689916   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.689931   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.689940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.693606   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:34.694219   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:34.694227   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.694234   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.694237   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.696105   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.188973   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:35.189024   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.189039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.189046   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.192172   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:35.192755   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:35.192763   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.192769   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.192772   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.194518   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.690180   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:35.690201   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.690214   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.690223   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.694006   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:35.694605   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:35.694612   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.694619   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.694622   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.696235   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.696565   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:36.189741   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:36.189767   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.189779   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.189785   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.193344   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:36.194036   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:36.194047   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.194055   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.194059   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.195836   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:36.690199   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:36.690224   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.690236   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.690241   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.693462   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:36.694091   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:36.694102   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.694110   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.694116   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.695766   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:37.190287   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:37.190309   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.190320   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.190326   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.196511   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:34:37.197043   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:37.197052   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.197058   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.197061   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.199818   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:37.690095   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:37.690118   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.690129   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.690136   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.693801   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:37.694618   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:37.694626   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.694632   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.694636   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.696670   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:37.697007   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
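
Although the GET pairs fire twice per second, the pod_ready.go:103 "Ready":"False" line surfaces only every ~2.5 seconds, so the status messages undercount the actual polls; the status logger is evidently rate-limited. The round-trip latencies stay in the 1-7 millisecond range throughout this window, which points at coredns itself, not apiserver responsiveness, as the reason the Ready condition never flips here.
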
	I1204 15:34:38.190293   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:38.190317   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.190329   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.190338   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.194628   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:38.195183   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:38.195190   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.195196   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.195201   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.197386   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:38.689866   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:38.689889   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.689900   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.689905   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.693601   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:38.694401   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:38.694412   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.694420   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.694426   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.696297   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:39.190990   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:39.191012   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.191024   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.191031   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.198155   20196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 15:34:39.199473   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:39.199482   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.199488   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.199493   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.205055   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:39.690106   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:39.690130   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.690142   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.690147   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.693615   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:39.694445   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:39.694452   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.694458   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.694462   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.696222   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.189693   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:40.189718   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.189731   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.189746   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.193370   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:40.194004   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.194012   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.194018   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.194021   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.195604   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.195934   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:40.195944   20196 pod_ready.go:82] duration metric: took 37.007028934s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
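The half-second cadence in the requests above (…:37.190, :37.690, :38.190, …) is the pod_ready poll loop: each tick fetches the pod and then its node, and the wait ends once the pod's Ready condition reports "True". A minimal client-go sketch of that pattern follows; the kubeconfig path, pod name, and 500ms interval are placeholders inferred from the log, not values taken from minikube's source.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is "True" -- the same
// status string the pod_ready messages above print.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms with a 6-minute budget, matching the "waiting up to 6m0s" lines.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-example", metav1.GetOptions{}) // placeholder name
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			return isPodReady(pod), nil
		})
	fmt.Println("wait finished:", err)
}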
	I1204 15:34:40.195952   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:40.195984   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:40.195989   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.195995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.195999   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.197711   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.198120   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.198128   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.198134   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.198136   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.199690   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.696200   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:40.696219   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.696228   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.696232   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.698719   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:40.699262   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.699270   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.699277   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.699281   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.701563   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:41.196423   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:41.196440   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.196446   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.196449   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.199972   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:41.200435   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:41.200444   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.200449   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.200454   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.202156   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:41.696302   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:41.696325   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.696334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.696376   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.698859   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:41.699465   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:41.699474   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.699480   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.699486   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.701569   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.197903   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:42.197925   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.197937   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.197942   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.200867   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.201412   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.201420   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.201427   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.201431   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.203130   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.203467   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:42.697162   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:42.697182   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.697194   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.697200   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.700051   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.700562   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.700570   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.700576   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.700579   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.702701   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.703063   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.703073   20196 pod_ready.go:82] duration metric: took 2.507044671s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.703080   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.703116   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000
	I1204 15:34:42.703121   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.703129   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.703134   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.705021   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.705585   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.705592   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.705598   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.705609   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.707581   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.708069   20196 pod_ready.go:93] pod "etcd-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.708079   20196 pod_ready.go:82] duration metric: took 4.993321ms for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.708086   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.708121   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m02
	I1204 15:34:42.708126   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.708131   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.708135   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.710061   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.710514   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:42.710522   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.710528   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.710532   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.712173   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.712569   20196 pod_ready.go:93] pod "etcd-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.712578   20196 pod_ready.go:82] duration metric: took 4.485807ms for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.712584   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.712616   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m03
	I1204 15:34:42.712621   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.712627   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.712630   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.714463   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.714960   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:42.714968   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.714976   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.714980   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.716756   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.717063   20196 pod_ready.go:93] pod "etcd-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.717072   20196 pod_ready.go:82] duration metric: took 4.482301ms for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.717082   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.717112   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:34:42.717116   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.717122   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.717126   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.718813   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.719178   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.719186   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.719192   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.719196   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.720739   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.721127   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.721135   20196 pod_ready.go:82] duration metric: took 4.047168ms for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.721141   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.898812   20196 request.go:632] Waited for 177.546709ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:34:42.898865   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:34:42.898875   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.898884   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.898890   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.901957   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.097426   20196 request.go:632] Waited for 194.940606ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:43.097482   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:43.097488   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.097494   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.097498   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.099791   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.100329   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.100338   20196 pod_ready.go:82] duration metric: took 379.181132ms for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
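The "Waited ... due to client-side throttling, not priority and fairness" messages are emitted by client-go's own rate limiter, not by the apiserver: the default client configuration allows 5 requests/second with a burst of 10, and the back-to-back pod+node GETs above exhaust that burst. A sketch of how a client would raise those limits (the values are illustrative):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // steady-state requests/second; client-go defaults to 5
	cfg.Burst = 100 // short-burst allowance; client-go defaults to 10
	_ = kubernetes.NewForConfigOrDie(cfg)
}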
	I1204 15:34:43.100345   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.297467   20196 request.go:632] Waited for 197.060564ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:34:43.297531   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:34:43.297536   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.297542   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.297546   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.299888   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.498066   20196 request.go:632] Waited for 197.627847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:43.498144   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:43.498154   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.498165   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.498171   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.501495   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.501933   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.501946   20196 pod_ready.go:82] duration metric: took 401.584296ms for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.501955   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.697541   20196 request.go:632] Waited for 195.539974ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:34:43.697609   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:34:43.697614   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.697620   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.697624   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.699660   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.897896   20196 request.go:632] Waited for 197.715706ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:43.897988   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:43.897999   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.898011   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.898040   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.901116   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.901493   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.901504   20196 pod_ready.go:82] duration metric: took 399.531331ms for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.901511   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.097961   20196 request.go:632] Waited for 196.319346ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:34:44.098022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:34:44.098031   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.098043   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.098052   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.101549   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.297607   20196 request.go:632] Waited for 195.557496ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:44.297743   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:44.297756   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.297766   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.297776   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.301215   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.301821   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:44.301835   20196 pod_ready.go:82] duration metric: took 400.304316ms for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.301844   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.497418   20196 request.go:632] Waited for 195.52419ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:34:44.497540   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:34:44.497551   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.497561   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.497567   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.500605   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.697803   20196 request.go:632] Waited for 196.768583ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:44.697874   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:44.697880   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.697886   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.697892   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.699791   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:44.700181   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:44.700191   20196 pod_ready.go:82] duration metric: took 398.331303ms for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.700206   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.897582   20196 request.go:632] Waited for 197.274481ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:34:44.897621   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:34:44.897628   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.897636   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.897643   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.899968   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.098303   20196 request.go:632] Waited for 197.936546ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:45.098458   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:45.098471   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.098481   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.098489   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.101906   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.102405   20196 pod_ready.go:93] pod "kube-proxy-8dv6r" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:45.102418   20196 pod_ready.go:82] duration metric: took 402.19463ms for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.102429   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.297787   20196 request.go:632] Waited for 195.298622ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:34:45.297896   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:34:45.297908   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.297918   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.297924   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.301224   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.497743   20196 request.go:632] Waited for 195.731374ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:45.497789   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:45.497798   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.497808   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.497816   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.501296   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.501752   20196 pod_ready.go:93] pod "kube-proxy-9strn" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:45.501764   20196 pod_ready.go:82] duration metric: took 399.314475ms for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.501772   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.698321   20196 request.go:632] Waited for 196.486057ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:34:45.698364   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:34:45.698368   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.698395   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.698400   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.700678   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.898338   20196 request.go:632] Waited for 197.154497ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:34:45.898437   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:34:45.898445   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.898454   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.898460   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.900811   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.901113   20196 pod_ready.go:98] node "ha-098000-m04" hosting pod "kube-proxy-mz4q2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-098000-m04" has status "Ready":"Unknown"
	I1204 15:34:45.901124   20196 pod_ready.go:82] duration metric: took 399.323564ms for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	E1204 15:34:45.901130   20196 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-098000-m04" hosting pod "kube-proxy-mz4q2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-098000-m04" has status "Ready":"Unknown"
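Unlike the earlier waits, kube-proxy-mz4q2 is not polled further: its node ha-098000-m04 reports Ready "Unknown", so the wait is skipped and recorded as a WaitExtra error. The gate only accepts an explicit "True"; a compilable sketch of that condition check (the package and function names are hypothetical):

package readiness

import corev1 "k8s.io/api/core/v1"

// nodeIsReady mirrors the skip above: "False" and "Unknown" both fail the gate.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}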
	I1204 15:34:45.901136   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.098348   20196 request.go:632] Waited for 197.16954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:34:46.098411   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:34:46.098417   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.098423   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.098428   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.100807   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.298026   20196 request.go:632] Waited for 196.74762ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:46.298086   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:46.298092   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.298098   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.298103   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.300358   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.300719   20196 pod_ready.go:93] pod "kube-proxy-rf4cp" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:46.300729   20196 pod_ready.go:82] duration metric: took 399.576022ms for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.300737   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.497896   20196 request.go:632] Waited for 197.086517ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:34:46.497983   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:34:46.498051   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.498063   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.498071   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.501601   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:46.698084   20196 request.go:632] Waited for 195.78543ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:46.698119   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:46.698125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.698170   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.698177   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.700251   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.700719   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:46.700729   20196 pod_ready.go:82] duration metric: took 399.975386ms for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.700736   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.898629   20196 request.go:632] Waited for 197.83339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:34:46.898748   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:34:46.898762   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.898773   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.898783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.902413   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.099363   20196 request.go:632] Waited for 196.494986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:47.099466   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:47.099477   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.099488   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.099495   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.102564   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.102986   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:47.102995   20196 pod_ready.go:82] duration metric: took 402.242621ms for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.103002   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.297846   20196 request.go:632] Waited for 194.795128ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:34:47.297889   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:34:47.297939   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.297949   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.297953   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.300484   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:47.498216   20196 request.go:632] Waited for 197.302267ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:47.498266   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:47.498358   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.498374   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.498381   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.501722   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.502017   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:47.502028   20196 pod_ready.go:82] duration metric: took 399.008512ms for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.502037   20196 pod_ready.go:39] duration metric: took 44.322579822s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:34:47.502061   20196 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:34:47.502149   20196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:34:47.513881   20196 api_server.go:72] duration metric: took 44.519844285s to wait for apiserver process to appear ...
	I1204 15:34:47.513892   20196 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:34:47.513909   20196 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1204 15:34:47.516967   20196 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1204 15:34:47.517003   20196 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1204 15:34:47.517008   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.517014   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.517018   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.517533   20196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 15:34:47.517562   20196 api_server.go:141] control plane version: v1.31.2
	I1204 15:34:47.517569   20196 api_server.go:131] duration metric: took 3.673154ms to wait for apiserver health ...
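With the pod waits settled, the check switches from polling objects to probing the apiserver itself: /healthz must answer 200 with the literal body "ok", and /version yields the control-plane build (v1.31.2 in this run). A sketch of both probes via client-go's discovery client, assuming a placeholder kubeconfig:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz: a healthy apiserver answers with the body "ok", as logged above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body))

	// GET /version: the same request the log shows right after the healthz probe.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}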
	I1204 15:34:47.517575   20196 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 15:34:47.697569   20196 request.go:632] Waited for 179.954091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:47.697605   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:47.697611   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.697617   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.697621   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.702548   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:47.707779   20196 system_pods.go:59] 26 kube-system pods found
	I1204 15:34:47.707791   20196 system_pods.go:61] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:34:47.707795   20196 system_pods.go:61] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:34:47.707798   20196 system_pods.go:61] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:34:47.707801   20196 system_pods.go:61] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:34:47.707809   20196 system_pods.go:61] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:34:47.707813   20196 system_pods.go:61] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:34:47.707815   20196 system_pods.go:61] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:34:47.707818   20196 system_pods.go:61] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:34:47.707821   20196 system_pods.go:61] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:34:47.707823   20196 system_pods.go:61] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:34:47.707826   20196 system_pods.go:61] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:34:47.707830   20196 system_pods.go:61] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:34:47.707837   20196 system_pods.go:61] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:34:47.707841   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:34:47.707844   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:34:47.707846   20196 system_pods.go:61] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:34:47.707849   20196 system_pods.go:61] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:34:47.707851   20196 system_pods.go:61] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:34:47.707854   20196 system_pods.go:61] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:34:47.707857   20196 system_pods.go:61] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:34:47.707860   20196 system_pods.go:61] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:34:47.707862   20196 system_pods.go:61] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:34:47.707865   20196 system_pods.go:61] "kube-vip-ha-098000" [e04c72cd-f983-42ad-b97f-eeff7a988de3] Running
	I1204 15:34:47.707867   20196 system_pods.go:61] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:34:47.707870   20196 system_pods.go:61] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:34:47.707874   20196 system_pods.go:61] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 15:34:47.707879   20196 system_pods.go:74] duration metric: took 190.294933ms to wait for pod list to return data ...
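The 26-pod sweep above checks pod phase rather than readiness, which is why storage-provisioner passes while still reporting ContainersNotReady: the pod is phase Running even though its container is not ready. A sketch that lists kube-system pods and surfaces that distinction (placeholder kubeconfig):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			// Phase Running + ContainersReady "False" is the storage-provisioner state above.
			if c.Type == corev1.ContainersReady && c.Status != corev1.ConditionTrue {
				fmt.Printf("%s: running but containers not ready\n", p.Name)
			}
		}
	}
}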
	I1204 15:34:47.707885   20196 default_sa.go:34] waiting for default service account to be created ...
	I1204 15:34:47.897357   20196 request.go:632] Waited for 189.411036ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:34:47.897446   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:34:47.897455   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.897463   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.897470   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.899736   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:47.899815   20196 default_sa.go:45] found service account: "default"
	I1204 15:34:47.899824   20196 default_sa.go:55] duration metric: took 191.920936ms for default service account to be created ...
	I1204 15:34:47.899831   20196 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 15:34:48.097563   20196 request.go:632] Waited for 197.602094ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:48.097612   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:48.097620   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:48.097663   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:48.097675   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:48.102765   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:48.109211   20196 system_pods.go:86] 26 kube-system pods found
	I1204 15:34:48.109362   20196 system_pods.go:89] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:34:48.109371   20196 system_pods.go:89] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:34:48.109375   20196 system_pods.go:89] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:34:48.109379   20196 system_pods.go:89] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:34:48.109383   20196 system_pods.go:89] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:34:48.109386   20196 system_pods.go:89] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:34:48.109389   20196 system_pods.go:89] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:34:48.109393   20196 system_pods.go:89] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:34:48.109396   20196 system_pods.go:89] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:34:48.109400   20196 system_pods.go:89] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:34:48.109403   20196 system_pods.go:89] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:34:48.109406   20196 system_pods.go:89] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:34:48.109409   20196 system_pods.go:89] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:34:48.109413   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:34:48.109417   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:34:48.109419   20196 system_pods.go:89] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:34:48.109422   20196 system_pods.go:89] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:34:48.109425   20196 system_pods.go:89] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:34:48.109428   20196 system_pods.go:89] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:34:48.109431   20196 system_pods.go:89] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:34:48.109434   20196 system_pods.go:89] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:34:48.109437   20196 system_pods.go:89] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:34:48.109439   20196 system_pods.go:89] "kube-vip-ha-098000" [e04c72cd-f983-42ad-b97f-eeff7a988de3] Running
	I1204 15:34:48.109442   20196 system_pods.go:89] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:34:48.109445   20196 system_pods.go:89] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:34:48.109450   20196 system_pods.go:89] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 15:34:48.109455   20196 system_pods.go:126] duration metric: took 209.614349ms to wait for k8s-apps to be running ...
	I1204 15:34:48.109461   20196 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 15:34:48.109531   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:34:48.120276   20196 system_svc.go:56] duration metric: took 10.810365ms WaitForService to wait for kubelet
	I1204 15:34:48.120291   20196 kubeadm.go:582] duration metric: took 45.126238068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
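The kubelet check above is a single shell probe run inside the VM: `systemctl is-active --quiet service kubelet` exits 0 only when the unit is active, so no output needs parsing. A local stand-in for that probe (minikube runs it over its ssh_runner; the helper name here is made up):

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive returns true iff systemd reports the unit as active;
// --quiet suppresses output, so the exit code alone carries the answer.
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}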
	I1204 15:34:48.120303   20196 node_conditions.go:102] verifying NodePressure condition ...
	I1204 15:34:48.297415   20196 request.go:632] Waited for 177.05913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1204 15:34:48.297455   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1204 15:34:48.297461   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:48.297469   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:48.297475   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:48.300123   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:48.300830   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300840   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300847   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300850   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300853   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300856   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300860   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300862   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300866   20196 node_conditions.go:105] duration metric: took 180.554037ms to run NodePressure ...
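The four capacity pairs above come from the single GET /api/v1/nodes: each of the cluster's four nodes advertises 17734596Ki of ephemeral storage and 2 CPUs in status.capacity. A sketch that reads the same fields (placeholder kubeconfig):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The log's "17734596Ki" and "2" come straight from these capacity fields.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}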
	I1204 15:34:48.300874   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:34:48.300889   20196 start.go:255] writing updated cluster config ...
	I1204 15:34:48.322431   20196 out.go:201] 
	I1204 15:34:48.344449   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:48.344580   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.367119   20196 out.go:177] * Starting "ha-098000-m04" worker node in "ha-098000" cluster
	I1204 15:34:48.409090   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:34:48.409115   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:34:48.409244   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:34:48.409257   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:34:48.409347   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.410058   20196 start.go:360] acquireMachinesLock for ha-098000-m04: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:34:48.410126   20196 start.go:364] duration metric: took 51.472µs to acquireMachinesLock for "ha-098000-m04"
	I1204 15:34:48.410144   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:34:48.410150   20196 fix.go:54] fixHost starting: m04
	I1204 15:34:48.410455   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:48.410480   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:48.421860   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58681
	I1204 15:34:48.422147   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:48.422522   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:48.422541   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:48.422736   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:48.422817   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:34:48.422956   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetState
	I1204 15:34:48.423067   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.423135   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 19762
	I1204 15:34:48.424293   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid 19762 missing from process table
	I1204 15:34:48.424344   20196 fix.go:112] recreateIfNeeded on ha-098000-m04: state=Stopped err=<nil>
	I1204 15:34:48.424356   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	W1204 15:34:48.424441   20196 fix.go:138] unexpected machine state, will restart: <nil>
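fixHost decides between reusing and restarting the machine by asking the driver for its state, and the "hyperkit pid 19762 missing from process table" line shows the answer: the pid recorded in hyperkit.pid no longer exists, so the state is Stopped and the VM must be relaunched. One common Unix idiom for that pid test, offered only as an illustrative sketch (minikube's hyperkit driver may implement it differently):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// pidAlive probes a pid with signal 0, which delivers nothing but reports
// whether the process still exists in the process table.
func pidAlive(pid int) bool {
	p, err := os.FindProcess(pid) // never fails on Unix; the probe is the Signal call
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println("alive:", pidAlive(19762))
}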
	I1204 15:34:48.445040   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m04" ...
	I1204 15:34:48.535157   20196 main.go:141] libmachine: (ha-098000-m04) Calling .Start
	I1204 15:34:48.535373   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.535405   20196 main.go:141] libmachine: (ha-098000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid
	I1204 15:34:48.535476   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Using UUID 8502617a-13a7-430f-a6ae-7be776245ae1
	I1204 15:34:48.565169   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Generated MAC 7a:59:49:d0:f8:66
	I1204 15:34:48.565217   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:34:48.565376   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8502617a-13a7-430f-a6ae-7be776245ae1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:34:48.565411   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8502617a-13a7-430f-a6ae-7be776245ae1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:34:48.565471   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8502617a-13a7-430f-a6ae-7be776245ae1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/ha-098000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:34:48.565528   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8502617a-13a7-430f-a6ae-7be776245ae1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/ha-098000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
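	[annotation] For reference, the hyperkit invocation logged above, reflowed one argument per line. This is just the logged CmdLine made readable, with shell quoting added around the -f payload because it contains spaces; nothing here was re-run:
	
	  /usr/local/bin/hyperkit -A -u \
	    -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid \
	    -c 2 -m 2200M \
	    -s 0:0,hostbridge \
	    -s 31,lpc \
	    -s 1:0,virtio-net \
	    -U 8502617a-13a7-430f-a6ae-7be776245ae1 \
	    -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/ha-098000-m04.rawdisk \
	    -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso \
	    -s 4,virtio-rnd \
	    -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/console-ring \
	    -f 'kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000'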
	I1204 15:34:48.565552   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:34:48.566902   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Pid is 20252
	I1204 15:34:48.567481   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Attempt 0
	I1204 15:34:48.567496   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.567619   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 20252
	I1204 15:34:48.570453   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Searching for 7a:59:49:d0:f8:66 in /var/db/dhcpd_leases ...
	I1204 15:34:48.570536   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:34:48.570551   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f4f2}
	I1204 15:34:48.570574   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f4d1}
	I1204 15:34:48.570588   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:34:48.570605   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:34:48.570615   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Found match: 7a:59:49:d0:f8:66
	I1204 15:34:48.570625   20196 main.go:141] libmachine: (ha-098000-m04) DBG | IP: 192.169.0.8
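	[annotation] On macOS, the vmnet framework records DHCP leases in /var/db/dhcpd_leases, so the driver recovers a VM's IP by matching the MAC it generated against that file, as the entries above show. A minimal manual equivalent (a sketch; requires root, MAC taken from the log above, and the exact line layout of lease entries may vary):
	
	  sudo grep -B3 -A2 '7a:59:49:d0:f8:66' /var/db/dhcpd_leases   # ip_address= appears near the hw_address= line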
	I1204 15:34:48.570635   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetConfigRaw
	I1204 15:34:48.571737   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:34:48.571957   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.572535   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:34:48.572555   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:34:48.572720   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:48.572824   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:48.572944   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:48.573100   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:48.573236   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:48.573428   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:48.573574   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:48.573582   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:34:48.578618   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:34:48.587514   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:34:48.588773   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:34:48.588818   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:34:48.588867   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:34:48.588887   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:34:49.021227   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:34:49.021251   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:34:49.136078   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:34:49.136099   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:34:49.136106   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:34:49.136115   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:34:49.136921   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:34:49.136930   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:34:54.890690   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:34:54.890729   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:34:54.890737   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:34:54.916069   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:34:59.632189   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:34:59.632205   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.632363   20196 buildroot.go:166] provisioning hostname "ha-098000-m04"
	I1204 15:34:59.632375   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.632472   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.632554   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:59.632630   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.632721   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.632816   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:59.633517   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:59.633682   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:59.633692   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m04 && echo "ha-098000-m04" | sudo tee /etc/hostname
	I1204 15:34:59.697622   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m04
	
	I1204 15:34:59.697639   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.697775   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:59.697886   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.697981   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.698057   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:59.698172   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:59.698298   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:59.698309   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
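	[annotation] The snippet above makes the new hostname resolve locally: if no /etc/hosts line already ends in ha-098000-m04, it rewrites an existing 127.0.1.1 entry in place or else appends one. One quick way to confirm the result on the guest (illustrative only):
	
	  grep '127.0.1.1' /etc/hosts    # expect: 127.0.1.1 ha-098000-m04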
	I1204 15:34:59.757369   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:34:59.757388   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:34:59.757401   20196 buildroot.go:174] setting up certificates
	I1204 15:34:59.757413   20196 provision.go:84] configureAuth start
	I1204 15:34:59.757421   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.757593   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:34:59.757706   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.757790   20196 provision.go:143] copyHostCerts
	I1204 15:34:59.757821   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:34:59.757873   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:34:59.757878   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:34:59.758004   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:34:59.758235   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:34:59.758271   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:34:59.758277   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:34:59.758377   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:34:59.758555   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:34:59.758595   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:34:59.758601   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:34:59.758673   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:34:59.758840   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m04 san=[127.0.0.1 192.169.0.8 ha-098000-m04 localhost minikube]
	I1204 15:35:00.089781   20196 provision.go:177] copyRemoteCerts
	I1204 15:35:00.090065   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:35:00.090090   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.090250   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.090364   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.090440   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.090527   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:00.124202   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:35:00.124273   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:35:00.161213   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:35:00.161289   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:35:00.180684   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:35:00.180757   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 15:35:00.200255   20196 provision.go:87] duration metric: took 442.820652ms to configureAuth
	I1204 15:35:00.200272   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:35:00.201095   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:35:00.201110   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:00.201255   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.201346   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.201433   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.201525   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.201613   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.201739   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.201862   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.201869   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:35:00.254941   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:35:00.254954   20196 buildroot.go:70] root file system type: tmpfs
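	[annotation] The fstype probe works because GNU df can emit just the filesystem-type column; the tail strips the header line, leaving the bare value that buildroot.go records above. Illustrated (assuming GNU coreutils df, as on the buildroot guest):
	
	  $ df --output=fstype /
	  Type
	  tmpfs
	  $ df --output=fstype / | tail -n 1
	  tmpfs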
	I1204 15:35:00.255043   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:35:00.255055   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.255192   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.255284   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.255363   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.255444   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.255591   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.255723   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.255769   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:35:00.320168   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:35:00.320186   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.320331   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.320425   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.320520   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.320607   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.320759   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.320905   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.320920   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:35:01.894648   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:35:01.894665   20196 machine.go:96] duration metric: took 13.321743335s to provisionDockerMachine
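	[annotation] On the update pattern run at 15:35:00.320920 above: minikube writes the rendered unit to docker.service.new, diffs it against the installed unit, and only when they differ moves the new file into place and daemon-reloads/enables/restarts docker, so an unchanged unit costs no restart. Here diff could not stat the target on the fresh guest, so the unit was installed and enabled, hence the "Created symlink" line. The same idempotent shape, simplified, with a hypothetical FILE variable for illustration:
	
	  # only swap in the new unit (and bounce the service) when content actually changed;
	  # diff exits nonzero both when the files differ and when the target does not exist yet
	  sudo diff -u "$FILE" "$FILE.new" || {
	    sudo mv "$FILE.new" "$FILE"
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	  }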
	I1204 15:35:01.894674   20196 start.go:293] postStartSetup for "ha-098000-m04" (driver="hyperkit")
	I1204 15:35:01.894686   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:35:01.894699   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:01.894901   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:35:01.894920   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:01.895018   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:01.895119   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:01.895219   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:01.895309   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:01.930531   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:35:01.933734   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:35:01.933745   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:35:01.933830   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:35:01.934221   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:35:01.934229   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:35:01.934400   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:35:01.942635   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:35:01.962080   20196 start.go:296] duration metric: took 67.394691ms for postStartSetup
	I1204 15:35:01.962104   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:01.962295   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:35:01.962307   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:01.962392   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:01.962474   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:01.962566   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:01.962648   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:01.996347   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:35:01.996427   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:35:02.030032   20196 fix.go:56] duration metric: took 13.619496662s for fixHost
	I1204 15:35:02.030058   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.030197   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.030296   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.030393   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.030479   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.030637   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:02.030806   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:02.030817   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:35:02.085147   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355302.120673328
	
	I1204 15:35:02.085159   20196 fix.go:216] guest clock: 1733355302.120673328
	I1204 15:35:02.085164   20196 fix.go:229] Guest: 2024-12-04 15:35:02.120673328 -0800 PST Remote: 2024-12-04 15:35:02.030047 -0800 PST m=+128.947170547 (delta=90.626328ms)
	I1204 15:35:02.085182   20196 fix.go:200] guest clock delta is within tolerance: 90.626328ms
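	[annotation] Worked out, the delta reported above is simply guest clock minus host clock at the check: 1733355302.120673328 s minus 1733355302.030047 s is about 0.090626 s, i.e. 90.626328 ms, small enough that fix.go skips any clock correction.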
	I1204 15:35:02.085188   20196 start.go:83] releasing machines lock for "ha-098000-m04", held for 13.674670433s
	I1204 15:35:02.085206   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.085349   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:35:02.123833   20196 out.go:177] * Found network options:
	I1204 15:35:02.144638   20196 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W1204 15:35:02.165506   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.165534   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.165554   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:35:02.165573   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.166172   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.166326   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	W1204 15:35:02.166492   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.166507   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.166517   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:35:02.166609   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:35:02.166623   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.166758   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.166911   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.167036   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.167085   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:35:02.167112   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.167158   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:02.167255   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.167389   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.167508   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.167638   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	W1204 15:35:02.202034   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:35:02.202111   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:35:02.250167   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:35:02.250181   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:35:02.250263   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:35:02.264522   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:35:02.273699   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:35:02.283110   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:35:02.283199   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:35:02.292318   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:35:02.301397   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:35:02.310459   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:35:02.319592   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:35:02.328805   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:35:02.338084   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:35:02.347336   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:35:02.356538   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:35:02.364640   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:35:02.364708   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:35:02.374467   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 15:35:02.382987   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:35:02.482753   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
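	[annotation] The sysctl probe at 15:35:02.356538 failed only because br_netfilter was not loaded yet, so /proc/sys/net/bridge/ did not exist; the modprobe and ip_forward write that follow are the usual bridge-networking prerequisites for a Kubernetes node. A condensed sketch of that sequence (bridge-nf-call-iptables typically defaults to 1 once the module is loaded):
	
	  sudo modprobe br_netfilter
	  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	  sysctl net.bridge.bridge-nf-call-iptables   # now resolvable; expected to report 1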
	I1204 15:35:02.500374   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:35:02.500464   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:35:02.521212   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:35:02.537841   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:35:02.556887   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:35:02.568330   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:35:02.579634   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:35:02.599962   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:35:02.611341   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:35:02.627983   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:35:02.630940   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:35:02.638934   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:35:02.652587   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:35:02.752578   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:35:02.855546   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:35:02.855575   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:35:02.869623   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:35:02.966924   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:36:03.915497   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.946841873s)
	I1204 15:36:03.916405   20196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1204 15:36:03.950956   20196 out.go:201] 
	W1204 15:36:03.971878   20196 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 04 23:35:00 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640232708Z" level=info msg="Starting up"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640913001Z" level=info msg="containerd not running, starting managed containerd"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.641520029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=498
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.659694182Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677007859Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677106781Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677181167Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677217787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677508761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677564998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677718553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677761182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677794548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677829672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677979478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.678361377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.679991465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680045979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680192561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680239332Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680562445Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680612744Z" level=info msg="metadata content store policy set" policy=shared
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684019168Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684126285Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684179264Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684280902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684315598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684384845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684662040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684780718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684823731Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684856490Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684888664Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684919549Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684954923Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684987161Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685018887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685064260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685101516Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685133834Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685178048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685213190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685243893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685277956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685310825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685342262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685371807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685438293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685477655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685510785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685541139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685570835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685612124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685654983Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685694239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685725951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685757256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685828769Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685873022Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686013280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686053930Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686084541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686114731Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686150092Z" level=info msg="NRI interface is disabled by configuration."
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686396292Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686486749Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686550930Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686589142Z" level=info msg="containerd successfully booted in 0.028291s"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.663269012Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.685002759Z" level=info msg="Loading containers: start."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.779781751Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.847897599Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.892577077Z" level=info msg="Loading containers: done."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902420090Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902480737Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902498001Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902856617Z" level=info msg="Daemon has completed initialization"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925683807Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925904543Z" level=info msg="API listen on [::]:2376"
	Dec 04 23:35:01 ha-098000-m04 systemd[1]: Started Docker Application Container Engine.
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.029030705Z" level=info msg="Processing signal 'terminated'"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.030916905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031062918Z" level=info msg="Daemon shutdown complete"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031129826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031209544Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 04 23:35:03 ha-098000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: docker.service: Deactivated successfully.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 dockerd[1154]: time="2024-12-04T23:35:04.084800926Z" level=info msg="Starting up"
	Dec 04 23:36:04 ha-098000-m04 dockerd[1154]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1204 15:36:03.971971   20196 out.go:270] * 
	W1204 15:36:03.973111   20196 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:36:04.052589   20196 out.go:201] 
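
	The sequence above shows dockerd on ha-098000-m04 restarting and then failing with `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`, i.e. containerd did not come back up before docker.service gave up. A minimal triage sketch, assuming the node is still reachable over SSH (profile and node names taken from the log; unit names are the standard ones on minikube's Buildroot image):

	# inspect both runtime units and the containerd socket on the affected node
	minikube ssh -p ha-098000 -n ha-098000-m04 -- sudo systemctl status docker containerd
	minikube ssh -p ha-098000 -n ha-098000-m04 -- sudo journalctl -u containerd --no-pager -n 50
	minikube ssh -p ha-098000 -n ha-098000-m04 -- ls -l /run/containerd/containerd.sock

	If the socket is missing, containerd's own journal usually shows why it failed to start.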
	
	
	==> Docker <==
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485304530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485378415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485392381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485468152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.488796991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.488864892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.489011132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.489159283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.505144687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.505534782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.505591473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.506131239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.584088576Z" level=info msg="shim disconnected" id=59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30 namespace=moby
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.584482016Z" level=warning msg="cleaning up after shim disconnected" id=59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30 namespace=moby
	Dec 04 23:34:37 ha-098000 dockerd[1151]: time="2024-12-04T23:34:37.584644745Z" level=info msg="ignoring event" container=59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.584822280Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.596833687Z" level=warning msg="cleanup warnings time=\"2024-12-04T23:34:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.455018691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.456263444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.456323640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.456579989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:36:08 ha-098000 dockerd[1158]: time="2024-12-04T23:36:08.463132287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:36:08 ha-098000 dockerd[1158]: time="2024-12-04T23:36:08.463712435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:36:08 ha-098000 dockerd[1158]: time="2024-12-04T23:36:08.463780797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:36:08 ha-098000 dockerd[1158]: time="2024-12-04T23:36:08.464133475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
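
	In the Docker section above, the `shim disconnected` / `failed to remove runc container: exit status 255` lines at 23:34:37 refer to container 59729ff8ece5d..., which the status table below lists as the Exited storage-provisioner (attempt 1) prior to its restart. A hedged way to pull that container's exit details on the node (the docker CLI accepts short IDs):

	minikube ssh -p ha-098000 -- docker inspect --format '{{.State.Status}} {{.State.ExitCode}}' 59729ff8ece5d
	minikube ssh -p ha-098000 -- docker logs --tail 50 59729ff8ece5d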
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5e57abd3c0726       6e38f40d628db                                                                                         9 seconds ago        Running             storage-provisioner       2                   85942c1ee0c48       storage-provisioner
	274afa9228625       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   566b4c12aa8e2       coredns-7c65d6cfc9-2z7lq
	9260f06aa6160       9ca7e41918271                                                                                         2 minutes ago        Running             kindnet-cni               1                   1784deace7582       kindnet-c9zw7
	3aa9f0074ad24       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   ee7fb852b0746       busybox-7dff88458-tkk5l
	a4f10e7a31b1e       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   9544aac6431ee       coredns-7c65d6cfc9-75cm5
	4d500c5582d7e       505d571f5fd56                                                                                         2 minutes ago        Running             kube-proxy                1                   e007c09acabae       kube-proxy-9strn
	59729ff8ece5d       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   85942c1ee0c48       storage-provisioner
	06090b0373c28       2e96e5913fc06                                                                                         2 minutes ago        Running             etcd                      1                   492043398c8f7       etcd-ha-098000
	832c9a15fccb2       847c7bc1a5418                                                                                         2 minutes ago        Running             kube-scheduler            1                   85cb9204adcbc       kube-scheduler-ha-098000
	28b6bc3009d9a       4b34defda8067                                                                                         2 minutes ago        Running             kube-vip                  0                   092f7a958b993       kube-vip-ha-098000
	3fbffe6ec740e       0486b6c53a1b5                                                                                         2 minutes ago        Running             kube-controller-manager   1                   d3d303d826e70       kube-controller-manager-ha-098000
	d11a51451327e       9499c9960544e                                                                                         2 minutes ago        Running             kube-apiserver            1                   2e4b3bead8edd       kube-apiserver-ha-098000
	91698004f45ac       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   7e62e6836673c       busybox-7dff88458-tkk5l
	334347c0146ff       c69fa2e9cbf5f                                                                                         8 minutes ago        Exited              coredns                   0                   106dba456980c       coredns-7c65d6cfc9-75cm5
	d45b7ca2c321b       c69fa2e9cbf5f                                                                                         8 minutes ago        Exited              coredns                   0                   0af8351fa9e0d       coredns-7c65d6cfc9-2z7lq
	fdb9e4f5e8f3d       kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16              8 minutes ago        Exited              kindnet-cni               0                   9933ca421eee5       kindnet-c9zw7
	12aba82bb9eef       505d571f5fd56                                                                                         8 minutes ago        Exited              kube-proxy                0                   1d340d81fbfb5       kube-proxy-9strn
	542f42367b5c6       0486b6c53a1b5                                                                                         8 minutes ago        Exited              kube-controller-manager   0                   05f42a6061648       kube-controller-manager-ha-098000
	1a5a6b8eb38ec       847c7bc1a5418                                                                                         8 minutes ago        Exited              kube-scheduler            0                   08cbd5b0cfe57       kube-scheduler-ha-098000
	347bf5bfb2fe6       2e96e5913fc06                                                                                         8 minutes ago        Exited              etcd                      0                   b7d6e2da744bd       etcd-ha-098000
	671e22f525950       9499c9960544e                                                                                         8 minutes ago        Exited              kube-apiserver            0                   81c0cf31c7e46       kube-apiserver-ha-098000
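
	The table shows every original control-plane container (attempt 0) Exited around the 8-minute mark and restarted as attempt 1, with storage-provisioner already on attempt 2 — consistent with the control-plane node having been restarted mid-test. The same view can be reproduced through the CRI rather than the docker CLI; a sketch, assuming crictl is available on the node as in minikube's usual cri-dockerd setup:

	minikube ssh -p ha-098000 -- sudo crictl ps -a
	minikube ssh -p ha-098000 -- sudo crictl logs 5e57abd3c0726   # e.g. the restarted storage-provisioner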
	
	
	==> coredns [274afa922862] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44993 - 12483 "HINFO IN 5217430967915220008.4602414331418196309. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026715635s
	
	
	==> coredns [334347c0146f] <==
	[INFO] 10.244.0.4:55981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000158655s
	[INFO] 10.244.0.4:42290 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098463s
	[INFO] 10.244.0.4:58242 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000058466s
	[INFO] 10.244.0.4:37059 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090224s
	[INFO] 10.244.3.2:34052 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150969s
	[INFO] 10.244.3.2:48314 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000048987s
	[INFO] 10.244.3.2:47597 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00004272s
	[INFO] 10.244.3.2:43130 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042338s
	[INFO] 10.244.3.2:40288 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040487s
	[INFO] 10.244.1.2:41974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126535s
	[INFO] 10.244.0.4:46586 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136951s
	[INFO] 10.244.3.2:49834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132271s
	[INFO] 10.244.3.2:35105 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007496s
	[INFO] 10.244.3.2:46872 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103202s
	[INFO] 10.244.3.2:51001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043782s
	[INFO] 10.244.1.2:60852 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151622s
	[INFO] 10.244.1.2:45169 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00010811s
	[INFO] 10.244.0.4:50794 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179037s
	[INFO] 10.244.0.4:33885 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091089s
	[INFO] 10.244.0.4:59078 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080787s
	[INFO] 10.244.0.4:47967 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000331118s
	[INFO] 10.244.3.2:37401 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005625s
	[INFO] 10.244.3.2:58299 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00008056s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4f10e7a31b1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56042 - 51130 "HINFO IN 4860731135473207728.3302970177185641581. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.195382352s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[898011711]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Dec-2024 23:34:08.764) (total time: 30003ms):
	Trace[898011711]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (23:34:38.768)
	Trace[898011711]: [30.003839217s] [30.003839217s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[451941860]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Dec-2024 23:34:08.766) (total time: 30002ms):
	Trace[451941860]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:34:38.768)
	Trace[451941860]: [30.00227073s] [30.00227073s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[957834387]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Dec-2024 23:34:08.764) (total time: 30004ms):
	Trace[957834387]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (23:34:38.768)
	Trace[957834387]: [30.004945433s] [30.004945433s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
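
	These reflector traces show the restarted CoreDNS pod spending a full 30s (23:34:08 to 23:34:38) unable to reach the in-cluster API VIP 10.96.0.1:443 before its list calls timed out — usually a sign that the service path (kube-proxy rules, apiserver endpoints) was not yet reprogrammed after the restart, rather than a CoreDNS fault. A hedged check of that path, using the label selectors kubeadm-provisioned clusters normally apply:

	kubectl get svc kubernetes -n default -o wide
	kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes
	kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50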
	
	
	==> coredns [d45b7ca2c321] <==
	[INFO] 10.244.1.2:58995 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.045573822s
	[INFO] 10.244.0.4:47628 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000074289s
	[INFO] 10.244.0.4:33651 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000655957s
	[INFO] 10.244.0.4:59923 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000433816s
	[INFO] 10.244.1.2:47489 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094853s
	[INFO] 10.244.1.2:60918 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109867s
	[INFO] 10.244.0.4:58795 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102995s
	[INFO] 10.244.0.4:56882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100778s
	[INFO] 10.244.0.4:41069 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155649s
	[INFO] 10.244.0.4:47261 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005694s
	[INFO] 10.244.3.2:57069 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00065513s
	[INFO] 10.244.3.2:45549 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000047282s
	[INFO] 10.244.3.2:44245 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103531s
	[INFO] 10.244.1.2:39311 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122238s
	[INFO] 10.244.1.2:35593 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174598s
	[INFO] 10.244.1.2:45158 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007291s
	[INFO] 10.244.0.4:35211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106877s
	[INFO] 10.244.0.4:54591 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089769s
	[INFO] 10.244.0.4:59162 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036611s
	[INFO] 10.244.1.2:49523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134823s
	[INFO] 10.244.1.2:54333 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139019s
	[INFO] 10.244.3.2:46351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095077s
	[INFO] 10.244.3.2:33059 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000046925s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-098000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T15_27_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:27:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:36:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:27:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:27:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:27:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:28:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-098000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d2318e94e39401090f7022df3a380b0
	  System UUID:                70104c46-0000-0000-9279-8221d5ed18af
	  Boot ID:                    637a375b-a691-4a3e-8b6f-369766d12741
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tkk5l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 coredns-7c65d6cfc9-2z7lq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m27s
	  kube-system                 coredns-7c65d6cfc9-75cm5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m27s
	  kube-system                 etcd-ha-098000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m34s
	  kube-system                 kindnet-c9zw7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m28s
	  kube-system                 kube-apiserver-ha-098000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-controller-manager-ha-098000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-9strn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-ha-098000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-vip-ha-098000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m26s                  kube-proxy       
	  Normal  Starting                 2m9s                   kube-proxy       
	  Normal  Starting                 8m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    8m38s (x8 over 8m39s)  kubelet          Node ha-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m38s (x8 over 8m39s)  kubelet          Node ha-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m38s (x7 over 8m39s)  kubelet          Node ha-098000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m32s                  kubelet          Node ha-098000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m32s                  kubelet          Node ha-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m32s                  kubelet          Node ha-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m32s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m29s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  NodeReady                8m8s                   kubelet          Node ha-098000 status is now: NodeReady
	  Normal  RegisteredNode           7m24s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  Starting                 3m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m6s (x8 over 3m6s)    kubelet          Node ha-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x8 over 3m6s)    kubelet          Node ha-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x7 over 3m6s)    kubelet          Node ha-098000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m36s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           2m35s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	
	
	Name:               ha-098000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T15_28_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:28:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:36:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:28:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:28:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:28:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:29:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-098000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 050a31912ec64c378c8000c9ffa16f74
	  System UUID:                2486449a-0000-0000-8055-5ee234f7d16f
	  Boot ID:                    90b90eed-fa44-41ea-9bc0-c9160a359639
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fvhj6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 etcd-ha-098000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m30s
	  kube-system                 kindnet-w7mbs                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-apiserver-ha-098000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-controller-manager-ha-098000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-proxy-8dv6r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-scheduler-ha-098000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-vip-ha-098000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m28s                  kube-proxy       
	  Normal   Starting                 2m31s                  kube-proxy       
	  Normal   Starting                 4m8s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  7m33s (x8 over 7m33s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  7m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     7m33s (x7 over 7m33s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m33s (x8 over 7m33s)  kubelet          Node ha-098000-m02 status is now: NodeHasNoDiskPressure
	  Normal   CIDRAssignmentFailed     7m32s                  cidrAllocator    Node ha-098000-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           7m29s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           7m24s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           6m8s                   node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 4m13s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 4m12s                  kubelet          Node ha-098000-m02 has been rebooted, boot id: 68d7d994-2a07-4139-8dc9-8d63e0527a5a
	  Normal   NodeHasSufficientMemory  4m12s                  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m12s                  kubelet          Node ha-098000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m12s                  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m4s                   node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m47s (x8 over 2m47s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x8 over 2m47s)  kubelet          Node ha-098000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x7 over 2m47s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m36s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           2m35s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	
	
	Name:               ha-098000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T15_30_55_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:30:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:32:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-098000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a62de52f960740ecbed3bac1b9967c23
	  System UUID:                8502430f-0000-0000-a6ae-7be776245ae1
	  Boot ID:                    2c58ff3e-7f5d-436d-bc58-b646d91cdd24
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bktcq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m23s
	  kube-system                 kube-proxy-mz4q2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m23s (x2 over 5m23s)  kubelet          Node ha-098000-m04 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     5m23s                  cidrAllocator    Node ha-098000-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     5m23s                  cidrAllocator    Node ha-098000-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m23s (x2 over 5m23s)  kubelet          Node ha-098000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m23s (x2 over 5m23s)  kubelet          Node ha-098000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  NodeReady                5m                     kubelet          Node ha-098000-m04 status is now: NodeReady
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           2m36s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           2m35s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-098000-m04 status is now: NodeNotReady
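
	ha-098000-m04 is the only node carrying `node.kubernetes.io/unreachable` taints: its kubelet last renewed its lease at 23:32:16 and the node-controller marked it NodeNotReady at ~23:34, which lines up with the dockerd/containerd failure captured at the top of this log. A short sketch for confirming the picture from the client side (node and profile names from the log):

	kubectl get nodes -o wide
	kubectl get node ha-098000-m04 -o jsonpath='{.spec.taints}'
	minikube ssh -p ha-098000 -n ha-098000-m04 -- sudo systemctl status kubelet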
	
	
	==> dmesg <==
	[  +0.035548] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008017] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.831418] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000001] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006643] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec 4 23:33] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.189224] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.252070] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.109203] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.970887] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +0.250804] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +0.104275] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +0.059539] kauditd_printk_skb: 135 callbacks suppressed
	[  +0.050691] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +2.385557] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +0.100797] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.107482] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	[  +0.131343] systemd-fstab-generator[1397]: Ignoring "noauto" option for root device
	[  +0.416585] systemd-fstab-generator[1555]: Ignoring "noauto" option for root device
	[  +6.808599] kauditd_printk_skb: 178 callbacks suppressed
	[ +34.877094] kauditd_printk_skb: 40 callbacks suppressed
	[Dec 4 23:34] kauditd_printk_skb: 20 callbacks suppressed
	[ +30.099102] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [06090b0373c2] <==
	{"level":"info","ts":"2024-12-04T23:34:04.808860Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"ba5f5cb2731bb4ee","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-12-04T23:34:04.809180Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.809788Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.817560Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.817933Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.822840Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"ba5f5cb2731bb4ee","stream-type":"stream Message"}
	{"level":"info","ts":"2024-12-04T23:34:04.823089Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.716587Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.169.0.7:58662","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-12-04T23:36:12.725212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(5521112234287866227 13314548521573537860)"}
	{"level":"info","ts":"2024-12-04T23:36:12.726251Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"ba5f5cb2731bb4ee","removed-remote-peer-urls":["https://192.169.0.7:2380"]}
	{"level":"info","ts":"2024-12-04T23:36:12.726295Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.726504Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.726575Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.726698Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.726773Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.726934Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.727113Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","error":"context canceled"}
	{"level":"warn","ts":"2024-12-04T23:36:12.727212Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ba5f5cb2731bb4ee","error":"failed to read ba5f5cb2731bb4ee on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-12-04T23:36:12.727230Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.727376Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-12-04T23:36:12.727445Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.727457Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.727465Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.733801Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b8c6c7563d17d844","remote-peer-id-stream-handler":"b8c6c7563d17d844","remote-peer-id-from":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.738397Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.169.0.7:59604","server-name":"","error":"read tcp 192.169.0.5:2380->192.169.0.7:59604: read: connection reset by peer"}
	
	
	==> etcd [347bf5bfb2fe] <==
	{"level":"info","ts":"2024-12-04T23:32:45.441636Z","caller":"traceutil/trace.go:171","msg":"trace[2067592358] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"7.930249208s","start":"2024-12-04T23:32:37.511384Z","end":"2024-12-04T23:32:45.441633Z","steps":["trace[2067592358] 'agreement among raft nodes before linearized reading'  (duration: 7.930237481s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:32:45.441645Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T23:32:37.511358Z","time spent":"7.930284886s","remote":"127.0.0.1:53382","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/12/04 23:32:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-04T23:32:45.469417Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4c9eee5331caa173","rtt":"895.585µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:32:45.469450Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4c9eee5331caa173","rtt":"6.632061ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:32:45.474963Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T23:32:45.474990Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-04T23:32:45.475061Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-12-04T23:32:45.477612Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477653Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477794Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477828Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477881Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477891Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477896Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.477902Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478035Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478719Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478746Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478876Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478921Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.484500Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-12-04T23:32:45.484609Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-12-04T23:32:45.484618Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-098000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 23:36:18 up 3 min,  0 users,  load average: 0.06, 0.12, 0.06
	Linux ha-098000 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9260f06aa616] <==
	I1204 23:35:40.730034       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:35:50.728940       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:35:50.729055       1 main.go:301] handling current node
	I1204 23:35:50.729090       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:35:50.729108       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:35:50.729688       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:35:50.729785       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:35:50.730373       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:35:50.730499       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:36:00.728959       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:36:00.729028       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:36:00.729792       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:36:00.729847       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:36:00.730298       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:36:00.730349       1 main.go:301] handling current node
	I1204 23:36:00.730594       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:36:00.730746       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:36:10.720296       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:36:10.720596       1 main.go:301] handling current node
	I1204 23:36:10.720807       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:36:10.720933       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:36:10.721361       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:36:10.721422       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:36:10.721602       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:36:10.721699       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [fdb9e4f5e8f3] <==
	I1204 23:32:15.006102       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:25.007657       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:32:25.007678       1 main.go:301] handling current node
	I1204 23:32:25.007687       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:32:25.007690       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:32:25.007809       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:32:25.007816       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:25.007864       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:32:25.007868       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:32:35.003703       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:32:35.003856       1 main.go:301] handling current node
	I1204 23:32:35.003925       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:32:35.004015       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:32:35.004440       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:32:35.004559       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:35.004793       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:32:35.004877       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:32:45.006980       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:32:45.007018       1 main.go:301] handling current node
	I1204 23:32:45.007028       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:32:45.007068       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:32:45.007194       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:32:45.007199       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:45.010702       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:32:45.010735       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [671e22f52595] <==
	W1204 23:32:45.463180       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 23:32:45.463233       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 23:32:45.463290       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 23:32:45.463996       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E1204 23:32:45.465107       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465150       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465162       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465210       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465693       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465858       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.470160       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.470650       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.470675       1 watcher.go:342] watch chan error: etcdserver: no leader
	W1204 23:32:45.470803       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E1204 23:32:45.471772       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471810       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471812       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471821       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471830       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471831       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471841       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471842       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471779       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471852       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471789       1 watcher.go:342] watch chan error: etcdserver: no leader
	
	
	==> kube-apiserver [d11a51451327] <==
	I1204 23:33:38.594218       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1204 23:33:38.687977       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1204 23:33:38.688296       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1204 23:33:38.691058       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1204 23:33:38.691545       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1204 23:33:38.691575       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1204 23:33:38.691653       1 shared_informer.go:320] Caches are synced for configmaps
	I1204 23:33:38.692048       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1204 23:33:38.694556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1204 23:33:38.694668       1 aggregator.go:171] initial CRD sync complete...
	I1204 23:33:38.694729       1 autoregister_controller.go:144] Starting autoregister controller
	I1204 23:33:38.694758       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1204 23:33:38.694764       1 cache.go:39] Caches are synced for autoregister controller
	I1204 23:33:38.696202       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1204 23:33:38.697593       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W1204 23:33:38.705769       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I1204 23:33:38.717725       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1204 23:33:38.717774       1 policy_source.go:224] refreshing policies
	I1204 23:33:38.734833       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 23:33:38.808657       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 23:33:38.819037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1204 23:33:38.825290       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1204 23:33:39.595794       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1204 23:33:39.838860       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.7]
	W1204 23:33:59.841208       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-controller-manager [3fbffe6ec740] <==
	I1204 23:34:21.497671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:34:22.938128       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:34:24.418330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43.789µs"
	I1204 23:34:25.747119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:34:26.532160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:34:40.034377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="49.026µs"
	I1204 23:34:40.057548       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gws4x\": the object has been modified; please apply your changes to the latest version and try again"
	I1204 23:34:40.057604       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c03d8180-947f-4c13-8442-c9080cad76d5", APIVersion:"v1", ResourceVersion:"295", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gws4x": the object has been modified; please apply your changes to the latest version and try again
	I1204 23:34:40.074632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.303024ms"
	I1204 23:34:40.074955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="276.731µs"
	I1204 23:34:42.740022       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gws4x\": the object has been modified; please apply your changes to the latest version and try again"
	I1204 23:34:42.740245       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c03d8180-947f-4c13-8442-c9080cad76d5", APIVersion:"v1", ResourceVersion:"295", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gws4x": the object has been modified; please apply your changes to the latest version and try again
	I1204 23:34:42.779484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.083905ms"
	I1204 23:34:42.779600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.609µs"
	I1204 23:36:09.380863       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m03"
	I1204 23:36:09.398727       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m03"
	I1204 23:36:09.473781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.224147ms"
	I1204 23:36:09.514589       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.272704ms"
	I1204 23:36:09.523247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.42842ms"
	I1204 23:36:09.523526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="239.36µs"
	I1204 23:36:11.564313       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.243µs"
	I1204 23:36:11.792368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.371µs"
	I1204 23:36:11.800141       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="209.77µs"
	I1204 23:36:13.480525       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m03"
	E1204 23:36:13.504069       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-098000-m03\", UID:\"1a2cb970-81cd-49bb-ae75-f6ed496ff60c\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-098000-m03\", UID:\"189ed158-f416-4d5e-91e6-e148874a3aad\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-098000-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [542f42367b5c] <==
	I1204 23:30:54.887539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	E1204 23:30:54.955494       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-098000-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-098000-m04" podCIDRs=["10.244.5.0/24"]
	E1204 23:30:54.955551       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-098000-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-098000-m04"
	E1204 23:30:54.955659       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-098000-m04': failed to patch node CIDR: Node \"ha-098000-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1204 23:30:54.955704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:54.963682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:55.113954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:55.398651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:58.480353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.039327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.039986       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-098000-m04"
	I1204 23:30:59.109649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.147948       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.198931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:04.937478       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:17.609283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:17.610373       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-098000-m04"
	I1204 23:31:17.617825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:18.441772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:25.412764       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:32:05.990296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m02"
	I1204 23:32:06.869398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.356069ms"
	I1204 23:32:06.870323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.454µs"
	I1204 23:32:09.287240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.687088ms"
	I1204 23:32:09.288363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="1.04846ms"
	
	
	==> kube-proxy [12aba82bb9ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:27:51.161946       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:27:51.171777       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1204 23:27:51.171971       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:27:51.199877       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:27:51.199962       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:27:51.199995       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:27:51.202350       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:27:51.202766       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:27:51.202823       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:27:51.204709       1 config.go:199] "Starting service config controller"
	I1204 23:27:51.205031       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:27:51.205184       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:27:51.205227       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:27:51.206547       1 config.go:328] "Starting node config controller"
	I1204 23:27:51.206855       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:27:51.305717       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:27:51.305831       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:27:51.307064       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4d500c5582d7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:34:08.809079       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:34:08.830727       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1204 23:34:08.830876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:34:08.863318       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:34:08.863364       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:34:08.863390       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:34:08.866204       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:34:08.866652       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:34:08.866681       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:34:08.868711       1 config.go:199] "Starting service config controller"
	I1204 23:34:08.869077       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:34:08.869308       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:34:08.869337       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:34:08.870512       1 config.go:328] "Starting node config controller"
	I1204 23:34:08.870544       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:34:08.970002       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:34:08.970040       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:34:08.970567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a5a6b8eb38e] <==
	I1204 23:27:45.983251       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1204 23:30:28.192188       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 051ac1c9-8f93-41a0-a61e-4bd649cbcde5(default/busybox-7dff88458-fvhj6) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-fvhj6"
	E1204 23:30:28.192284       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 051ac1c9-8f93-41a0-a61e-4bd649cbcde5(default/busybox-7dff88458-fvhj6) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-fvhj6"
	I1204 23:30:28.192310       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fvhj6" node="ha-098000-m02"
	E1204 23:30:28.227861       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-rlnh2 is already present in the active queue" pod="default/busybox-7dff88458-rlnh2"
	E1204 23:30:54.897693       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vtbzp\": pod kindnet-vtbzp is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vtbzp" node="ha-098000-m04"
	E1204 23:30:54.897853       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vtbzp\": pod kindnet-vtbzp is already assigned to node \"ha-098000-m04\"" pod="kube-system/kindnet-vtbzp"
	E1204 23:30:54.897931       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pdg7h\": pod kube-proxy-pdg7h is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pdg7h" node="ha-098000-m04"
	E1204 23:30:54.897986       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pdg7h\": pod kube-proxy-pdg7h is already assigned to node \"ha-098000-m04\"" pod="kube-system/kube-proxy-pdg7h"
	E1204 23:30:54.935358       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x7xvx\": pod kindnet-x7xvx is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x7xvx" node="ha-098000-m04"
	E1204 23:30:54.935544       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x7xvx\": pod kindnet-x7xvx is already assigned to node \"ha-098000-m04\"" pod="kube-system/kindnet-x7xvx"
	E1204 23:30:54.936188       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bktcq\": pod kindnet-bktcq is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bktcq" node="ha-098000-m04"
	E1204 23:30:54.936258       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5ff5e29d-8bdb-492f-8be8-65295fb7d83f(kube-system/kindnet-bktcq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bktcq"
	E1204 23:30:54.936329       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bktcq\": pod kindnet-bktcq is already assigned to node \"ha-098000-m04\"" pod="kube-system/kindnet-bktcq"
	I1204 23:30:54.936384       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bktcq" node="ha-098000-m04"
	E1204 23:30:54.935423       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rgp97\": pod kube-proxy-rgp97 is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rgp97" node="ha-098000-m04"
	E1204 23:30:54.935358       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mz4q2\": pod kube-proxy-mz4q2 is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mz4q2" node="ha-098000-m04"
	E1204 23:30:54.937674       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mz4q2\": pod kube-proxy-mz4q2 is already assigned to node \"ha-098000-m04\"" pod="kube-system/kube-proxy-mz4q2"
	E1204 23:30:54.939537       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c066164d-5b0a-40ca-93b9-d13c732f8d23(kube-system/kube-proxy-rgp97) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rgp97"
	E1204 23:30:54.939583       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rgp97\": pod kube-proxy-rgp97 is already assigned to node \"ha-098000-m04\"" pod="kube-system/kube-proxy-rgp97"
	I1204 23:30:54.939599       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rgp97" node="ha-098000-m04"
	I1204 23:32:45.399421       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1204 23:32:45.401282       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:32:45.403399       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1204 23:32:45.416820       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [832c9a15fccb] <==
	I1204 23:33:19.647940       1 serving.go:386] Generated self-signed cert in-memory
	W1204 23:33:30.004268       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1204 23:33:30.004311       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 23:33:30.004317       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 23:33:38.634637       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 23:33:38.636924       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:33:38.643589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:33:38.644074       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 23:33:38.644906       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:33:38.645277       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 23:33:38.745790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 23:34:37 ha-098000 kubelet[1562]: E1204 23:34:37.992685    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:34:39 ha-098000 kubelet[1562]: I1204 23:34:39.407493    1562 scope.go:117] "RemoveContainer" containerID="d45b7ca2c321bb88eb0207b6b8d2cc8e28c3a5dfeb3831e851f9d73934d05579"
	Dec 04 23:34:53 ha-098000 kubelet[1562]: I1204 23:34:53.407334    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:34:53 ha-098000 kubelet[1562]: E1204 23:34:53.407489    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:05 ha-098000 kubelet[1562]: I1204 23:35:05.407398    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:05 ha-098000 kubelet[1562]: E1204 23:35:05.407597    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:11 ha-098000 kubelet[1562]: E1204 23:35:11.434026    1562 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 23:35:11 ha-098000 kubelet[1562]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 23:35:11 ha-098000 kubelet[1562]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 23:35:11 ha-098000 kubelet[1562]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 23:35:11 ha-098000 kubelet[1562]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 23:35:18 ha-098000 kubelet[1562]: I1204 23:35:18.406757    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:18 ha-098000 kubelet[1562]: E1204 23:35:18.407152    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:31 ha-098000 kubelet[1562]: I1204 23:35:31.407458    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:31 ha-098000 kubelet[1562]: E1204 23:35:31.408574    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:42 ha-098000 kubelet[1562]: I1204 23:35:42.407061    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:42 ha-098000 kubelet[1562]: E1204 23:35:42.407183    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:54 ha-098000 kubelet[1562]: I1204 23:35:54.407450    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:54 ha-098000 kubelet[1562]: E1204 23:35:54.407806    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:36:08 ha-098000 kubelet[1562]: I1204 23:36:08.406922    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:36:11 ha-098000 kubelet[1562]: E1204 23:36:11.431349    1562 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 23:36:11 ha-098000 kubelet[1562]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 23:36:11 ha-098000 kubelet[1562]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 23:36:11 ha-098000 kubelet[1562]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 23:36:11 ha-098000 kubelet[1562]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
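The post-mortem below filters for pods stuck outside the Running phase with a kubectl field selector. The same query expressed as a minimal client-go sketch (illustrative only; the kubeconfig path is taken from this run's environment, and this is not code from the test suite):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the integration run's kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/20045-17258/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Same filter as `kubectl get po -A --field-selector=status.phase!=Running`.
		pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}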
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-098000 -n ha-098000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-098000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-pfjg5
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-098000 describe pod busybox-7dff88458-pfjg5
helpers_test.go:282: (dbg) kubectl --context ha-098000 describe pod busybox-7dff88458-pfjg5:

-- stdout --
	Name:             busybox-7dff88458-pfjg5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fgxgh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fgxgh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  10s               default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  10s               default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s (x2 over 10s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (11.38s)
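The FailedScheduling events above explain why busybox-7dff88458-pfjg5 stays Pending: of the four nodes, one is unreachable, one is unschedulable, and the remaining two already host busybox replicas that repel the new pod via anti-affinity. A minimal Go sketch of such a rule (the app=busybox label appears in the pod description above; the topology key is an assumption, not taken from the test's actual manifest):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// At most one busybox replica per node: pods labelled app=busybox
		// repel each other within the kubernetes.io/hostname topology domain.
		affinity := &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		fmt.Printf("%+v\n", affinity)
	}

With a rule like this, deleting a control-plane node while a second node carries an unreachable taint leaves the scheduler with no admissible placement, which matches the "0/4 nodes are available" events.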

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (4.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:415: expected profile "ha-098000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-098000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-098000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-098000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-098000 -n ha-098000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 logs -n 25
E1204 15:36:21.508619   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-098000 logs -n 25: (3.248070541s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m02 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m03_ha-098000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m03:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04:/home/docker/cp-test_ha-098000-m03_ha-098000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m04 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m03_ha-098000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-098000 cp testdata/cp-test.txt                                                                                            | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3261314918/001/cp-test_ha-098000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000:/home/docker/cp-test_ha-098000-m04_ha-098000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000 sudo cat                                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m04_ha-098000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m02:/home/docker/cp-test_ha-098000-m04_ha-098000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m02 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m04_ha-098000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt                                                                          | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m03:/home/docker/cp-test_ha-098000-m04_ha-098000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n                                                                                                             | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | ha-098000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-098000 ssh -n ha-098000-m03 sudo cat                                                                                      | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | /home/docker/cp-test_ha-098000-m04_ha-098000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-098000 node stop m02 -v=7                                                                                                 | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:31 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-098000 node start m02 -v=7                                                                                                | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:31 PST | 04 Dec 24 15:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-098000 -v=7                                                                                                       | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:32 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-098000 -v=7                                                                                                            | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:32 PST | 04 Dec 24 15:32 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-098000 --wait=true -v=7                                                                                                | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:32 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-098000                                                                                                            | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:36 PST |                     |
	| node    | ha-098000 node delete m03 -v=7                                                                                               | ha-098000 | jenkins | v1.34.0 | 04 Dec 24 15:36 PST | 04 Dec 24 15:36 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 15:32:53
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 15:32:53.124576   20196 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:32:53.124878   20196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:53.124886   20196 out.go:358] Setting ErrFile to fd 2...
	I1204 15:32:53.124892   20196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:53.125142   20196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:32:53.126967   20196 out.go:352] Setting JSON to false
	I1204 15:32:53.159313   20196 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5543,"bootTime":1733349630,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 15:32:53.159464   20196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:32:53.181549   20196 out.go:177] * [ha-098000] minikube v1.34.0 on Darwin 15.0.1
	I1204 15:32:53.224271   20196 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:32:53.224311   20196 notify.go:220] Checking for updates...
	I1204 15:32:53.267840   20196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:32:53.289126   20196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 15:32:53.310338   20196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:32:53.331010   20196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 15:32:53.352255   20196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:32:53.373929   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:32:53.374098   20196 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:32:53.374835   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.374907   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:32:53.386958   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58600
	I1204 15:32:53.387294   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:32:53.387686   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:32:53.387699   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:32:53.387905   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:32:53.388016   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.418809   20196 out.go:177] * Using the hyperkit driver based on existing profile
	I1204 15:32:53.461003   20196 start.go:297] selected driver: hyperkit
	I1204 15:32:53.461036   20196 start.go:901] validating driver "hyperkit" against &{Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:53.461290   20196 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:32:53.461477   20196 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:53.461727   20196 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 15:32:53.473875   20196 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 15:32:53.481311   20196 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.481337   20196 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 15:32:53.486904   20196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:32:53.486942   20196 cni.go:84] Creating CNI manager for ""
	I1204 15:32:53.486987   20196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 15:32:53.487059   20196 start.go:340] cluster config:
	{Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:53.487162   20196 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:53.508071   20196 out.go:177] * Starting "ha-098000" primary control-plane node in "ha-098000" cluster
	I1204 15:32:53.529205   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:32:53.529292   20196 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1204 15:32:53.529312   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:32:53.529537   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:32:53.529555   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:32:53.529727   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:32:53.530635   20196 start.go:360] acquireMachinesLock for ha-098000: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:32:53.530735   20196 start.go:364] duration metric: took 76.824µs to acquireMachinesLock for "ha-098000"
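The lock specs printed above (Delay:500ms Timeout:13m0s) describe a poll-until-timeout acquisition: the attempt is retried every delay interval until it succeeds or the deadline passes. A minimal Go sketch of that pattern, with try standing in for whatever non-blocking attempt the real lock uses (the helper names here are illustrative, not minikube's):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // acquire polls try() every delay until it succeeds or timeout
    // elapses, mirroring the Delay/Timeout fields in the lock specs
    // the log prints.
    func acquire(try func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !try() {
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring lock")
            }
            time.Sleep(delay)
        }
        return nil
    }

    func main() {
        free := true // toy lock state: first attempt succeeds
        err := acquire(func() bool { return free }, 500*time.Millisecond, 13*time.Minute)
        fmt.Println(err) // <nil>; the log's 76µs acquisition is this fast path
    }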
	I1204 15:32:53.530765   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:32:53.530784   20196 fix.go:54] fixHost starting: 
	I1204 15:32:53.531293   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:32:53.531320   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:32:53.542703   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58602
	I1204 15:32:53.543046   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:32:53.543457   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:32:53.543473   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:32:53.543695   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:32:53.543798   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.543917   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:32:53.544005   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.544085   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 19294
	I1204 15:32:53.545215   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid 19294 missing from process table
	I1204 15:32:53.545258   20196 fix.go:112] recreateIfNeeded on ha-098000: state=Stopped err=<nil>
	I1204 15:32:53.545275   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	W1204 15:32:53.545373   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:32:53.586803   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000" ...
	I1204 15:32:53.608028   20196 main.go:141] libmachine: (ha-098000) Calling .Start
	I1204 15:32:53.608287   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.608354   20196 main.go:141] libmachine: (ha-098000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid
	I1204 15:32:53.610773   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid 19294 missing from process table
	I1204 15:32:53.610786   20196 main.go:141] libmachine: (ha-098000) DBG | pid 19294 is in state "Stopped"
	I1204 15:32:53.610801   20196 main.go:141] libmachine: (ha-098000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid...
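The driver treats the recorded hyperkit pid as stale because pid 19294 is no longer in the process table, and removes the pid file before starting a fresh VM. A rough Go sketch of that liveness test on a POSIX system (the function and file names are assumptions for illustration, not minikube's actual helpers):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
        "syscall"
    )

    // isStale reports whether the pid recorded in pidFile no longer
    // refers to a live process, in which case the file can be removed.
    func isStale(pidFile string) (bool, error) {
        data, err := os.ReadFile(pidFile)
        if err != nil {
            return false, err
        }
        pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
        if err != nil {
            return false, err
        }
        // Signal 0 performs error checking only: ESRCH means no such
        // process, so the recorded pid is stale.
        if err := syscall.Kill(pid, 0); err == syscall.ESRCH {
            return true, nil
        }
        return false, nil
    }

    func main() {
        stale, err := isStale("hyperkit.pid")
        fmt.Println(stale, err)
    }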
	I1204 15:32:53.611292   20196 main.go:141] libmachine: (ha-098000) DBG | Using UUID 70106e4e-8082-4c46-9279-8221d5ed18af
	I1204 15:32:53.728648   20196 main.go:141] libmachine: (ha-098000) DBG | Generated MAC 46:3b:47:9c:31:41
	I1204 15:32:53.728673   20196 main.go:141] libmachine: (ha-098000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:32:53.728953   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70106e4e-8082-4c46-9279-8221d5ed18af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000425170)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:32:53.728996   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70106e4e-8082-4c46-9279-8221d5ed18af", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000425170)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:32:53.729068   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "70106e4e-8082-4c46-9279-8221d5ed18af", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/ha-098000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:32:53.729113   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 70106e4e-8082-4c46-9279-8221d5ed18af -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/ha-098000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:32:53.729129   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:32:53.730591   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 DEBUG: hyperkit: Pid is 20209
	I1204 15:32:53.731014   20196 main.go:141] libmachine: (ha-098000) DBG | Attempt 0
	I1204 15:32:53.731028   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:32:53.731114   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:32:53.732978   20196 main.go:141] libmachine: (ha-098000) DBG | Searching for 46:3b:47:9c:31:41 in /var/db/dhcpd_leases ...
	I1204 15:32:53.733030   20196 main.go:141] libmachine: (ha-098000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:32:53.733053   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:32:53.733076   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f47a}
	I1204 15:32:53.733086   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f3e2}
	I1204 15:32:53.733096   20196 main.go:141] libmachine: (ha-098000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f369}
	I1204 15:32:53.733112   20196 main.go:141] libmachine: (ha-098000) DBG | Found match: 46:3b:47:9c:31:41
	I1204 15:32:53.733119   20196 main.go:141] libmachine: (ha-098000) DBG | IP: 192.169.0.5
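Because hyperkit VMs lease addresses from macOS's DHCP server, the driver rediscovers the restarted VM's IP by scanning /var/db/dhcpd_leases for the MAC address it generated. A simplified Go sketch of that match, assuming the leases have already been parsed into the struct shape the log prints:

    package main

    import "fmt"

    // Lease mirrors the fields the driver logs for each
    // /var/db/dhcpd_leases entry (simplified).
    type Lease struct {
        Name      string
        IPAddress string
        HWAddress string
    }

    // findIP returns the IP bound to the given MAC, if any lease matches.
    func findIP(leases []Lease, mac string) (string, bool) {
        for _, l := range leases {
            if l.HWAddress == mac {
                return l.IPAddress, true
            }
        }
        return "", false
    }

    func main() {
        leases := []Lease{
            {Name: "minikube", IPAddress: "192.169.0.6", HWAddress: "b2:39:f5:23:0b:32"},
            {Name: "minikube", IPAddress: "192.169.0.5", HWAddress: "46:3b:47:9c:31:41"},
        }
        if ip, ok := findIP(leases, "46:3b:47:9c:31:41"); ok {
            fmt.Println("VM IP:", ip) // matches the log: IP: 192.169.0.5
        }
    }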
	I1204 15:32:53.733163   20196 main.go:141] libmachine: (ha-098000) Calling .GetConfigRaw
	I1204 15:32:53.733987   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:32:53.734258   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:32:53.734730   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:32:53.734741   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:32:53.734939   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:32:53.735075   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:32:53.735212   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:32:53.735339   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:32:53.735471   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:32:53.735700   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:32:53.735888   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:32:53.735897   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:32:53.741792   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:32:53.798085   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:32:53.799084   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:32:53.799132   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:32:53.799147   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:32:53.799159   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:32:54.212915   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:32:54.212930   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:32:54.327517   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:32:54.327538   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:32:54.327567   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:32:54.327585   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:32:54.328504   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:32:54.328518   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:32:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:00.053293   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:33:00.053310   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:33:00.053327   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:33:00.080441   20196 main.go:141] libmachine: (ha-098000) DBG | 2024/12/04 15:33:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:33:04.805929   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:04.805956   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.806123   20196 buildroot.go:166] provisioning hostname "ha-098000"
	I1204 15:33:04.806135   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.806234   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.806337   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:04.806431   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.806539   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.806630   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:04.806774   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:04.806928   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:04.806937   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000 && echo "ha-098000" | sudo tee /etc/hostname
	I1204 15:33:04.881527   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000
	
	I1204 15:33:04.881546   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.881688   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:04.881782   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.881867   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:04.881972   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:04.882116   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:04.882259   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:04.882270   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:04.951908   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
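The shell run just above is an idempotent /etc/hosts update: if no line already maps the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a new one, so repeated provisioning never duplicates entries. A small Go helper that rebuilds the same script with the hostname as a parameter (a sketch for illustration, not minikube's actual code):

    package main

    import "fmt"

    // hostsScript returns a shell snippet that idempotently maps the
    // machine hostname to 127.0.1.1, mirroring the command in the log:
    // rewrite an existing 127.0.1.1 entry, otherwise append one.
    func hostsScript(hostname string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() { fmt.Println(hostsScript("ha-098000")) }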
	I1204 15:33:04.951928   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:04.951941   20196 buildroot.go:174] setting up certificates
	I1204 15:33:04.951947   20196 provision.go:84] configureAuth start
	I1204 15:33:04.951953   20196 main.go:141] libmachine: (ha-098000) Calling .GetMachineName
	I1204 15:33:04.952087   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:04.952194   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:04.952301   20196 provision.go:143] copyHostCerts
	I1204 15:33:04.952333   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:04.952388   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:04.952396   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:04.952514   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:04.952739   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:04.952770   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:04.952775   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:04.952846   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:04.953021   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:04.953050   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:04.953054   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:04.953117   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:04.953299   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000 san=[127.0.0.1 192.169.0.5 ha-098000 localhost minikube]
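configureAuth issues a server certificate whose subject alternative names cover every way the machine may be addressed: loopback, the DHCP-assigned IP, the machine name, localhost, and minikube. A Go sketch of a certificate carrying those SANs (the real flow signs with the CA key named in the log; this self-signs to keep the example short, and all names are taken from the san=[...] list above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert builds a server cert template with IP and DNS SANs,
    // which x509 stores in separate fields, then self-signs it.
    func newServerCert(ips []net.IP, dns []string) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-098000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            IPAddresses:  ips,
            DNSNames:     dns,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    }

    func main() {
        der, err := newServerCert(
            []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
            []string{"ha-098000", "localhost", "minikube"},
        )
        fmt.Println(len(der), err)
    }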
	I1204 15:33:05.029495   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:05.029569   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:05.029587   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.029725   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.029828   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.029935   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.030021   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:05.069556   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:05.069632   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:05.088502   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:05.088560   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1204 15:33:05.107211   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:05.107270   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:33:05.127045   20196 provision.go:87] duration metric: took 175.080758ms to configureAuth
	I1204 15:33:05.127060   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:05.127241   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:05.127255   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:05.127390   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.127495   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.127590   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.127679   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.127810   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.127983   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.128112   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.128119   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:05.194828   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:05.194840   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:05.194934   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:05.194945   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.195075   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.195184   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.195275   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.195365   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.195540   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.195677   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.195720   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:05.269411   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
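The unit echoed back above uses the standard systemd override idiom: an empty ExecStart= first clears the start command inherited from the base docker.service, because anything other than a Type=oneshot service may carry only one ExecStart, exactly as the comment in the unit explains. A tiny Go sketch that assembles that pair of lines from a flag list (the helper name is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // dockerdExec builds the override's ExecStart pair. The first, empty
    // ExecStart= resets the value inherited from the base docker.service;
    // systemd would otherwise reject two ExecStart= lines for Type=notify.
    func dockerdExec(flags []string) string {
        return "ExecStart=\nExecStart=/usr/bin/dockerd " + strings.Join(flags, " ")
    }

    func main() {
        fmt.Println(dockerdExec([]string{
            "-H", "tcp://0.0.0.0:2376",
            "-H", "unix:///var/run/docker.sock",
            "--label", "provider=hyperkit",
        }))
    }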
	I1204 15:33:05.269434   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:05.269574   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:05.269679   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.269784   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:05.269878   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:05.270029   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:05.270180   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:05.270192   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:06.947784   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
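The diff || { mv; daemon-reload; enable; restart; } command above only swaps in the new unit and restarts docker when the rendered file differs from what is on disk; here diff fails because no unit exists yet, so the service is installed, enabled, and started. A Go sketch of the same compare-before-swap decision (the paths and helper name are assumptions for illustration):

    package main

    import (
        "bytes"
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    // needsSwap reports whether the freshly rendered unit differs from
    // the one on disk; a missing current unit (the case in the log,
    // where diff prints "can't stat") also triggers the swap-and-restart
    // path.
    func needsSwap(current, rendered string) (bool, error) {
        old, err := os.ReadFile(current)
        if errors.Is(err, fs.ErrNotExist) {
            return true, nil
        }
        if err != nil {
            return false, err
        }
        fresh, err := os.ReadFile(rendered)
        if err != nil {
            return false, err
        }
        return !bytes.Equal(old, fresh), nil
    }

    func main() {
        swap, err := needsSwap("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new")
        fmt.Println(swap, err)
    }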
	I1204 15:33:06.947801   20196 machine.go:96] duration metric: took 13.212685267s to provisionDockerMachine
	I1204 15:33:06.947813   20196 start.go:293] postStartSetup for "ha-098000" (driver="hyperkit")
	I1204 15:33:06.947820   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:06.947830   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:06.948036   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:06.948057   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:06.948150   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:06.948258   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:06.948370   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:06.948484   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:06.990689   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:06.994074   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:06.994089   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:06.994206   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:06.994349   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:06.994356   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:06.994521   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:07.005479   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:07.040997   20196 start.go:296] duration metric: took 93.160395ms for postStartSetup
	I1204 15:33:07.041019   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.041214   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:07.041227   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.041320   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.041401   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.041488   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.041577   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.079449   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:07.079522   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:07.131796   20196 fix.go:56] duration metric: took 13.600616251s for fixHost
	I1204 15:33:07.131819   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.131964   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.132056   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.132147   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.132258   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.132400   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:07.132541   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1204 15:33:07.132548   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:07.198066   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355187.085615924
	
	I1204 15:33:07.198080   20196 fix.go:216] guest clock: 1733355187.085615924
	I1204 15:33:07.198085   20196 fix.go:229] Guest: 2024-12-04 15:33:07.085615924 -0800 PST Remote: 2024-12-04 15:33:07.131808 -0800 PST m=+14.052161483 (delta=-46.192076ms)
	I1204 15:33:07.198107   20196 fix.go:200] guest clock delta is within tolerance: -46.192076ms
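After boot, the host reads the guest clock with date +%s.%N and compares it with its own; the -46.192076ms delta above falls inside the allowed skew, so no resync is needed. A sketch of that tolerance check in Go (the 1s threshold is an assumed value for illustration, not minikube's actual limit):

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // withinTolerance mirrors the check in the log: the guest clock is
    // accepted when |guest-host| stays below the allowed skew.
    func withinTolerance(guest, host time.Time, max time.Duration) bool {
        delta := guest.Sub(host)
        return math.Abs(float64(delta)) < float64(max)
    }

    func main() {
        host := time.Now()
        guest := host.Add(-46192076 * time.Nanosecond) // delta from the log
        fmt.Println(withinTolerance(guest, host, time.Second)) // true
    }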
	I1204 15:33:07.198113   20196 start.go:83] releasing machines lock for "ha-098000", held for 13.666979222s
	I1204 15:33:07.198132   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198272   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:07.198375   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198673   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198785   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:07.198878   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:07.198921   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.198947   20196 ssh_runner.go:195] Run: cat /version.json
	I1204 15:33:07.198968   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:07.199026   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.199093   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:07.199123   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.199209   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:07.199228   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.199298   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.199315   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:07.199396   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:07.233868   20196 ssh_runner.go:195] Run: systemctl --version
	I1204 15:33:07.278985   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 15:33:07.283423   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:07.283478   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:07.298510   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:07.298524   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:07.298651   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:07.315201   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:07.324137   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:07.332963   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:07.333027   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:07.341883   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:07.350757   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:07.359678   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:07.368612   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:07.377607   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:07.386447   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:07.395124   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
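
The run of sed commands above rewrites /etc/containerd/config.toml in place: pinning the sandbox image, forcing SystemdCgroup = false to match the cgroupfs driver, migrating runtime v1 entries to io.containerd.runc.v2, and resetting conf_dir. A hedged Go sketch of one such anchored substitution, applied to an in-memory snippet rather than the real file:

    // tomlsub.go: the kind of anchored substitution the sed commands above
    // perform, shown with Go's regexp package on an in-memory snippet.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        // Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
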
	I1204 15:33:07.404070   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:07.412097   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:07.412157   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:07.421208   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
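
The sysctl probe above failed only because br_netfilter was not loaded yet, so /proc/sys/net/bridge does not exist; the runner then loads the module and enables IPv4 forwarding. A small Go sketch of probing those knobs straight from procfs (Linux paths, purely illustrative):

    // netfilterprobe.go: probe the bridge-netfilter sysctl via procfs, the
    // same condition the "sudo sysctl" call above tests (Linux-only paths).
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            fmt.Println("br_netfilter not loaded:", err) // remedied by: modprobe br_netfilter
            return
        }
        v, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
        if err != nil {
            fmt.Println("read ip_forward:", err)
            return
        }
        fmt.Printf("ip_forward = %s", v)
    }
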
	I1204 15:33:07.429418   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:07.524346   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:07.542570   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:07.542668   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:07.559288   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:07.569950   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:07.583434   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:07.593916   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:07.603881   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:07.624337   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:07.634820   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:07.649640   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:07.652619   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:07.659817   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:07.673288   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:07.772876   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:07.878665   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:07.878744   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:07.892585   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:07.986161   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:33:10.248338   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.262094537s)
	I1204 15:33:10.248412   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:33:10.259004   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:33:10.272350   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:10.282710   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:33:10.373201   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:33:10.481588   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:10.590503   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:33:10.604294   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:10.614461   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:10.704083   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:33:10.769517   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:33:10.769615   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:33:10.774192   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:33:10.774266   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:33:10.777449   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:33:10.800815   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 15:33:10.800899   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:10.817205   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:10.856841   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:33:10.856890   20196 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:33:10.857354   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:33:10.862069   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
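
The one-liner above makes the /etc/hosts edit idempotent: it filters out any existing host.minikube.internal line before appending the current gateway mapping. The same rewrite, sketched in Go on an in-memory hosts file (the 192.169.0.99 stale entry is invented for the example):

    // hostsrewrite.go: idempotently replace the host.minikube.internal entry,
    // mirroring the bash one-liner above on an in-memory hosts file.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.169.0.99\thost.minikube.internal\n"
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.169.0.1\thost.minikube.internal")
        fmt.Println(strings.Join(kept, "\n"))
    }
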
	I1204 15:33:10.871775   20196 kubeadm.go:883] updating cluster {Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 15:33:10.871875   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:10.871949   20196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:33:10.885784   20196 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1204 15:33:10.885796   20196 docker.go:619] Images already preloaded, skipping extraction
	I1204 15:33:10.885882   20196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:33:10.904423   20196 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1204 15:33:10.904444   20196 cache_images.go:84] Images are preloaded, skipping loading
	I1204 15:33:10.904450   20196 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1204 15:33:10.904531   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:33:10.904612   20196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 15:33:10.937949   20196 cni.go:84] Creating CNI manager for ""
	I1204 15:33:10.937963   20196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1204 15:33:10.937974   20196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 15:33:10.938009   20196 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-098000 NodeName:ha-098000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 15:33:10.938085   20196 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-098000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
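
The generated kubeadm.yaml above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is scp'd to /var/tmp/minikube/kubeadm.yaml.new below. A trivial Go sketch that splits such a stream on document separators to inspect the parts (contents abbreviated to the kinds):

    // splitdocs.go: count and name the documents in a multi-document YAML
    // stream like the kubeadm.yaml above.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
        docs := strings.Split(stream, "\n---\n")
        fmt.Println("documents:", len(docs))
        for _, d := range docs {
            fmt.Println(" -", strings.TrimSpace(strings.SplitN(d, "\n", 2)[0]))
        }
    }
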
	
	I1204 15:33:10.938101   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:33:10.938174   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:33:10.950599   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:33:10.950678   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
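
The manifest above runs kube-vip as a static pod: leader election (vip_leaderelection) decides which control-plane node holds the virtual IP 192.169.0.254, and lb_enable load-balances API traffic on port 8443 across the members. A minimal Go sketch that checks whether the VIP answers on the API port; reachability from the machine running the check is an assumption:

    // vipcheck.go: confirm the kube-vip virtual IP answers on the API server
    // port. Address and port come from the manifest above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("VIP is accepting connections")
    }
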
	I1204 15:33:10.950747   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:33:10.959008   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:33:10.959066   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1204 15:33:10.966355   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1204 15:33:10.979785   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:33:10.993124   20196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1204 15:33:11.007280   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:33:11.020699   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:33:11.023569   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:11.032639   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:11.133629   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:11.148832   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.5
	I1204 15:33:11.148845   20196 certs.go:194] generating shared ca certs ...
	I1204 15:33:11.148855   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.149029   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:33:11.149085   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:33:11.149095   20196 certs.go:256] generating profile certs ...
	I1204 15:33:11.149184   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:33:11.149204   20196 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330
	I1204 15:33:11.149219   20196 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I1204 15:33:11.369000   20196 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 ...
	I1204 15:33:11.369023   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330: {Name:mkee72feeeccd665b141717d3a28fdfb2c7bde31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.369371   20196 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330 ...
	I1204 15:33:11.369381   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330: {Name:mk73951855cf52179c105169e788f46cc4d39a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.369660   20196 certs.go:381] copying /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt.edefc330 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt
	I1204 15:33:11.369853   20196 certs.go:385] copying /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.edefc330 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key
	I1204 15:33:11.370068   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:33:11.370078   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:33:11.370100   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:33:11.370120   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:33:11.370139   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:33:11.370157   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:33:11.370176   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:33:11.370196   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:33:11.370213   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:33:11.370295   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:33:11.370331   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:33:11.370340   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:33:11.370387   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:33:11.370418   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:33:11.370453   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:33:11.370519   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:11.370552   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.370573   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.370591   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.371058   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:33:11.399000   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:33:11.441701   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:33:11.476788   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:33:11.508692   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:33:11.528963   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:33:11.548308   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:33:11.567414   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:33:11.586589   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:33:11.605437   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:33:11.624356   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:33:11.643314   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 15:33:11.656890   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:33:11.661063   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:33:11.670050   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.673329   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.673378   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:33:11.677431   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 15:33:11.686327   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:33:11.695205   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.698569   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.698616   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:11.702683   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:33:11.711573   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:33:11.720441   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.723730   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.723772   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:33:11.727893   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
	I1204 15:33:11.736772   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:33:11.740128   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:33:11.744800   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:33:11.749129   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:33:11.753890   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:33:11.758287   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:33:11.762608   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
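
Each "openssl x509 ... -checkend 86400" run above asks a single question: does the certificate expire within the next 24 hours (86400 seconds)? The equivalent check in Go with crypto/x509, a sketch using the first path probed in the log (running it requires access to the guest filesystem):

    // checkend.go: Go equivalent of "openssl x509 -checkend 86400": report
    // whether the certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        fmt.Println("expires within 24h:", time.Now().Add(24*time.Hour).After(cert.NotAfter))
    }
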
	I1204 15:33:11.766918   20196 kubeadm.go:392] StartCluster: {Name:ha-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:33:11.767041   20196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:33:11.779240   20196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 15:33:11.787479   20196 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 15:33:11.787491   20196 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 15:33:11.787539   20196 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 15:33:11.796840   20196 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:33:11.797140   20196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-098000" does not appear in /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.797223   20196 kubeconfig.go:62] /Users/jenkins/minikube-integration/20045-17258/kubeconfig needs updating (will repair): [kubeconfig missing "ha-098000" cluster setting kubeconfig missing "ha-098000" context setting]
	I1204 15:33:11.797420   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/kubeconfig: {Name:mk988c2800ea459104871ce2a5d515d71b51f8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.797819   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.798024   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:33:11.798341   20196 cert_rotation.go:140] Starting client certificate rotation controller
	I1204 15:33:11.798533   20196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 15:33:11.806274   20196 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1204 15:33:11.806292   20196 kubeadm.go:597] duration metric: took 18.792967ms to restartPrimaryControlPlane
	I1204 15:33:11.806299   20196 kubeadm.go:394] duration metric: took 39.384435ms to StartCluster
	I1204 15:33:11.806313   20196 settings.go:142] acquiring lock: {Name:mk99ad63e4feda725ee10448138b299c26bf8cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.806400   20196 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:11.806790   20196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-17258/kubeconfig: {Name:mk988c2800ea459104871ce2a5d515d71b51f8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.807009   20196 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:11.807022   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:33:11.807035   20196 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 15:33:11.807145   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.850133   20196 out.go:177] * Enabled addons: 
	I1204 15:33:11.871157   20196 addons.go:510] duration metric: took 64.116535ms for enable addons: enabled=[]
	I1204 15:33:11.871244   20196 start.go:246] waiting for cluster config update ...
	I1204 15:33:11.871256   20196 start.go:255] writing updated cluster config ...
	I1204 15:33:11.894284   20196 out.go:201] 
	I1204 15:33:11.915277   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.915378   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:11.939339   20196 out.go:177] * Starting "ha-098000-m02" control-plane node in "ha-098000" cluster
	I1204 15:33:11.981186   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:11.981222   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:11.981421   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:33:11.981442   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:11.981558   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:11.982398   20196 start.go:360] acquireMachinesLock for ha-098000-m02: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:11.982475   20196 start.go:364] duration metric: took 58.776µs to acquireMachinesLock for "ha-098000-m02"
	I1204 15:33:11.982495   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:33:11.982501   20196 fix.go:54] fixHost starting: m02
	I1204 15:33:11.982818   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:11.982845   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:11.994288   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58624
	I1204 15:33:11.994640   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:11.995007   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:11.995021   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:11.995253   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:11.995373   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:11.995490   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetState
	I1204 15:33:11.995578   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:11.995648   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20139
	I1204 15:33:11.996810   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 20139 missing from process table
	I1204 15:33:11.996835   20196 fix.go:112] recreateIfNeeded on ha-098000-m02: state=Stopped err=<nil>
	I1204 15:33:11.996847   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	W1204 15:33:11.996942   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:33:12.039213   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m02" ...
	I1204 15:33:12.060086   20196 main.go:141] libmachine: (ha-098000-m02) Calling .Start
	I1204 15:33:12.060346   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:12.060380   20196 main.go:141] libmachine: (ha-098000-m02) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid
	I1204 15:33:12.061608   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 20139 missing from process table
	I1204 15:33:12.061617   20196 main.go:141] libmachine: (ha-098000-m02) DBG | pid 20139 is in state "Stopped"
	I1204 15:33:12.061626   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid...
	I1204 15:33:12.061806   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Using UUID 2486faac-afab-449a-8055-5ee234f7d16f
	I1204 15:33:12.086653   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Generated MAC b2:39:f5:23:0b:32
	I1204 15:33:12.086676   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:33:12.086820   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2486faac-afab-449a-8055-5ee234f7d16f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004233b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:12.086851   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2486faac-afab-449a-8055-5ee234f7d16f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004233b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:12.086887   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2486faac-afab-449a-8055-5ee234f7d16f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/ha-098000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:33:12.086920   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2486faac-afab-449a-8055-5ee234f7d16f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/ha-098000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:33:12.086929   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:33:12.088450   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 DEBUG: hyperkit: Pid is 20220
	I1204 15:33:12.088937   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Attempt 0
	I1204 15:33:12.088953   20196 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:12.089027   20196 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20220
	I1204 15:33:12.090875   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Searching for b2:39:f5:23:0b:32 in /var/db/dhcpd_leases ...
	I1204 15:33:12.090963   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:33:12.090982   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:33:12.091003   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:33:12.091026   20196 main.go:141] libmachine: (ha-098000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f47a}
	I1204 15:33:12.091037   20196 main.go:141] libmachine: (ha-098000-m02) DBG | Found match: b2:39:f5:23:0b:32
	I1204 15:33:12.091047   20196 main.go:141] libmachine: (ha-098000-m02) DBG | IP: 192.169.0.6
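
The driver recovers the VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC it generated. Note that the lease ID above drops a leading zero (b2:39:f5:23:b:32 versus the generated b2:39:f5:23:0b:32), so any matcher has to normalize octets before comparing. A hedged Go sketch of that normalization:

    // macmatch.go: normalize MAC octets before comparing, since macOS lease
    // records may drop leading zeros (b2:39:f5:23:b:32 vs b2:39:f5:23:0b:32).
    package main

    import (
        "fmt"
        "strings"
    )

    func normalizeMAC(mac string) string {
        parts := strings.Split(strings.ToLower(mac), ":")
        for i, p := range parts {
            p = strings.TrimLeft(p, "0")
            if p == "" {
                p = "0"
            }
            parts[i] = p
        }
        return strings.Join(parts, ":")
    }

    func main() {
        generated := "b2:39:f5:23:0b:32" // MAC the driver generated
        lease := "b2:39:f5:23:b:32"      // form seen in /var/db/dhcpd_leases
        fmt.Println("match:", normalizeMAC(generated) == normalizeMAC(lease))
    }
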
	I1204 15:33:12.091078   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetConfigRaw
	I1204 15:33:12.091745   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:12.091957   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:12.092493   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:33:12.092503   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:12.092649   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:12.092776   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:12.092901   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:12.093004   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:12.093096   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:12.093267   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:12.093463   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:12.093473   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:33:12.099465   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:33:12.108663   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:33:12.109633   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:12.109661   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:12.109674   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:12.109689   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:12.508437   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:33:12.508452   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:33:12.623247   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:12.623267   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:12.623283   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:12.623289   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:12.624086   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:33:12.624095   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:18.362951   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1204 15:33:18.362990   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1204 15:33:18.362997   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1204 15:33:18.387781   20196 main.go:141] libmachine: (ha-098000-m02) DBG | 2024/12/04 15:33:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1204 15:33:23.149238   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:23.149254   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.149403   20196 buildroot.go:166] provisioning hostname "ha-098000-m02"
	I1204 15:33:23.149415   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.149509   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.149612   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.149697   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.149796   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.149882   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.150012   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.150165   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.150173   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m02 && echo "ha-098000-m02" | sudo tee /etc/hostname
	I1204 15:33:23.207677   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m02
	
	I1204 15:33:23.207693   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.207831   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.207942   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.208053   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.208156   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.208340   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.208503   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.208515   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
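The shell block above makes the machine's own hostname resolvable locally: if no /etc/hosts entry ends in ha-098000-m02, it rewrites an existing 127.0.1.1 line in place or appends one. A quick way to verify the result on the guest (a hypothetical follow-up check, not a command the test runs):
	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts    # expect: 127.0.1.1 ha-098000-m02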
	I1204 15:33:23.265398   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:33:23.265414   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:23.265426   20196 buildroot.go:174] setting up certificates
	I1204 15:33:23.265434   20196 provision.go:84] configureAuth start
	I1204 15:33:23.265443   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetMachineName
	I1204 15:33:23.265604   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:23.265696   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.265792   20196 provision.go:143] copyHostCerts
	I1204 15:33:23.265821   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:23.265868   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:23.265874   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:23.266044   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:23.266308   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:23.266347   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:23.266352   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:23.266606   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:23.266780   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:23.266810   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:23.266815   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:23.266891   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:23.267067   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m02 san=[127.0.0.1 192.169.0.6 ha-098000-m02 localhost minikube]
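For reference, the SAN list baked into server.pem can be inspected with openssl (a hypothetical check using the path from the log line above; the test itself does not run this):
	openssl x509 -noout -text -in /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
	# expect the san=[...] entries logged above: 127.0.0.1, 192.169.0.6, ha-098000-m02, localhost, minikube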
	I1204 15:33:23.418588   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:23.418649   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:23.418663   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.418794   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.418895   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.418994   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.419094   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:23.449777   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:23.449845   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:23.469736   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:23.469808   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:33:23.489512   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:23.489573   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:33:23.509353   20196 provision.go:87] duration metric: took 243.902721ms to configureAuth
	I1204 15:33:23.509367   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:23.509536   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:23.509550   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:23.509693   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.509787   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.509886   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.509981   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.510059   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.510190   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.510321   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.510328   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:23.557917   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:23.557929   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:23.558018   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:23.558034   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.558154   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.558255   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.558337   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.558428   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.558600   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.558722   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.558764   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:23.619577   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:23.619599   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:23.619741   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:23.619853   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.619941   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:23.620042   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:23.620196   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:23.620336   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:23.620348   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:25.265062   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:33:25.265078   20196 machine.go:96] duration metric: took 13.172205227s to provisionDockerMachine
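The diff-or-install one-liner above is an idempotency pattern: the unit file is only replaced, and docker only enabled and restarted, when the rendered file actually differs; here diff fails because no unit existed yet, so the new file is moved into place. The same pattern in isolation (a generic sketch with placeholder paths, not minikube code):
	sudo diff -u /etc/example.conf /tmp/example.conf.new \
	  || { sudo mv /tmp/example.conf.new /etc/example.conf; sudo systemctl daemon-reload; }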
	I1204 15:33:25.265092   20196 start.go:293] postStartSetup for "ha-098000-m02" (driver="hyperkit")
	I1204 15:33:25.265099   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:25.265111   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.265311   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:25.265332   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.265441   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.265529   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.265633   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.265739   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.304266   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:25.311180   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:25.311193   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:25.311283   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:25.311424   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:25.311431   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:25.311607   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:25.324859   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:25.357942   20196 start.go:296] duration metric: took 92.839826ms for postStartSetup
	I1204 15:33:25.357966   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.358160   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:25.358173   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.358261   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.358352   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.358436   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.358521   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.389685   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:25.389754   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:25.422337   20196 fix.go:56] duration metric: took 13.439453986s for fixHost
	I1204 15:33:25.422364   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.422533   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.422647   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.422735   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.422815   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.422958   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:25.423099   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1204 15:33:25.423107   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:25.472632   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355205.621764225
	
	I1204 15:33:25.472647   20196 fix.go:216] guest clock: 1733355205.621764225
	I1204 15:33:25.472652   20196 fix.go:229] Guest: 2024-12-04 15:33:25.621764225 -0800 PST Remote: 2024-12-04 15:33:25.422353 -0800 PST m=+32.342189685 (delta=199.411225ms)
	I1204 15:33:25.472663   20196 fix.go:200] guest clock delta is within tolerance: 199.411225ms
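The delta is simply the guest timestamp minus the host timestamp captured around the SSH call. Reproduced (last digits may vary with double precision):
	awk 'BEGIN { printf "%.3f ms\n", (1733355205.621764225 - 1733355205.422353) * 1000 }'
	# -> 199.411 ms, within tolerance, so the guest clock is left untouched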
	I1204 15:33:25.472667   20196 start.go:83] releasing machines lock for "ha-098000-m02", held for 13.489803052s
	I1204 15:33:25.472697   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.472837   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:25.496277   20196 out.go:177] * Found network options:
	I1204 15:33:25.537194   20196 out.go:177]   - NO_PROXY=192.169.0.5
	W1204 15:33:25.558335   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:25.558422   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559432   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559728   20196 main.go:141] libmachine: (ha-098000-m02) Calling .DriverName
	I1204 15:33:25.559899   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:25.559950   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	W1204 15:33:25.560026   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:25.560173   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:33:25.560212   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHHostname
	I1204 15:33:25.560218   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.560413   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHPort
	I1204 15:33:25.560435   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.560588   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHKeyPath
	I1204 15:33:25.560653   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.560755   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetSSHUsername
	I1204 15:33:25.560803   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	I1204 15:33:25.560929   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m02/id_rsa Username:docker}
	W1204 15:33:25.589676   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:25.589750   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:25.635633   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:25.635654   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:25.635765   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:25.651707   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:25.660095   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:25.668588   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:25.668650   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:25.676830   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:25.685079   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:25.693509   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:25.701733   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:25.710137   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:25.718450   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:25.726929   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:33:25.735114   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:25.742569   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:25.742622   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:25.751585   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
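The failure-then-modprobe sequence above is the usual bridge-netfilter dance: the sysctl cannot be read while br_netfilter is unloaded, because /proc/sys/net/bridge/* only exists once the module is in the kernel. In isolation:
	sudo sysctl net.bridge.bridge-nf-call-iptables   # fails while the module is unloaded
	sudo modprobe br_netfilter                       # creates /proc/sys/net/bridge/*
	sudo sysctl net.bridge.bridge-nf-call-iptables   # now succeeds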
	I1204 15:33:25.759751   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:25.851537   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:25.870178   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:25.870261   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:25.886777   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:25.898631   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:25.915954   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:25.927090   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:25.937345   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:25.958314   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:25.968609   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:25.983636   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:25.986491   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:25.993508   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:26.006712   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:26.100912   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:26.190828   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:26.190859   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:26.204976   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:26.305524   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:33:28.666691   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.361082583s)
	I1204 15:33:28.666774   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:33:28.677849   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:33:28.691293   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:28.702315   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:33:28.804235   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:33:28.895456   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:29.008598   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:33:29.022244   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:33:29.033285   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:29.123647   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:33:29.194113   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:33:29.194213   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:33:29.198266   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:33:29.198329   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:33:29.201217   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:33:29.226480   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
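This confirms the CRI endpoint written to /etc/crictl.yaml earlier is serving: Docker 27.3.1 behind cri-dockerd, speaking CRI v1. An equivalent manual check on the guest (hypothetical; endpoint taken from the log):
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version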
	I1204 15:33:29.226574   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:29.245410   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:33:29.286251   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:33:29.327924   20196 out.go:177]   - env NO_PROXY=192.169.0.5
	I1204 15:33:29.348859   20196 main.go:141] libmachine: (ha-098000-m02) Calling .GetIP
	I1204 15:33:29.349296   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:33:29.353761   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:29.363356   20196 mustload.go:65] Loading cluster: ha-098000
	I1204 15:33:29.363524   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:29.363748   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:29.363768   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:29.374807   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58646
	I1204 15:33:29.375120   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:29.375473   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:29.375491   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:29.375697   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:29.375799   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:33:29.375885   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:29.375946   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:33:29.377121   20196 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:33:29.377369   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:29.377393   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:29.388419   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58648
	I1204 15:33:29.388721   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:29.389015   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:29.389049   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:29.389281   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:29.389378   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:33:29.389495   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.6
	I1204 15:33:29.389501   20196 certs.go:194] generating shared ca certs ...
	I1204 15:33:29.389513   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:29.389656   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:33:29.389710   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:33:29.389719   20196 certs.go:256] generating profile certs ...
	I1204 15:33:29.389811   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:33:29.389878   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.3ecf7e1a
	I1204 15:33:29.389931   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:33:29.389938   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:33:29.389964   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:33:29.389985   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:33:29.390009   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:33:29.390029   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:33:29.390048   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:33:29.390067   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:33:29.390086   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:33:29.390163   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:33:29.390207   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:33:29.390215   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:33:29.390250   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:33:29.390285   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:33:29.390316   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:33:29.390382   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:29.390418   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.390439   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.390458   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.390483   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:33:29.390568   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:33:29.390658   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:33:29.390751   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:33:29.390833   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:33:29.422140   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 15:33:29.425696   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 15:33:29.434269   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 15:33:29.437377   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 15:33:29.446042   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 15:33:29.449183   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 15:33:29.457490   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 15:33:29.460647   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1204 15:33:29.469352   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 15:33:29.472755   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 15:33:29.481093   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 15:33:29.484099   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1204 15:33:29.492651   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:33:29.513068   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:33:29.533396   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:33:29.553633   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:33:29.573360   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:33:29.592833   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:33:29.612325   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:33:29.631705   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:33:29.651772   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:33:29.671647   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:33:29.691028   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:33:29.710680   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 15:33:29.724088   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 15:33:29.738048   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 15:33:29.751781   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1204 15:33:29.765280   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 15:33:29.779127   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1204 15:33:29.792641   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 15:33:29.806335   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:33:29.810643   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:33:29.819095   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.822486   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.822534   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:33:29.826729   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:33:29.835308   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:33:29.843890   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.847451   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.847503   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:33:29.851708   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
	I1204 15:33:29.859922   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:33:29.868147   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.871612   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.871654   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:33:29.875808   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
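The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: the subject hash of each certificate plus a .0 suffix for the first link with that hash, which is how OpenSSL locates CAs in /etc/ssl/certs. For example (path from the log):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941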
	I1204 15:33:29.884074   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:33:29.887539   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:33:29.891899   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:33:29.896170   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:33:29.900557   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:33:29.904814   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:33:29.909235   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1204 15:33:29.913504   20196 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1204 15:33:29.913564   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
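Each node gets its own kubelet drop-in pinning --hostname-override and --node-ip (192.169.0.6 for m02) so addressing stays stable across the multi-node cluster. On the guest, the merged unit could be checked with (hypothetical):
	sudo systemctl cat kubelet | grep -- '--node-ip'   # expect --node-ip=192.169.0.6 on m02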
	I1204 15:33:29.913578   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:33:29.913625   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:33:29.926130   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:33:29.926164   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
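This manifest runs kube-vip as a static pod on every control-plane node: the instances elect a leader via the plndr-cp-lock Lease, and the winner ARP-advertises the VIP 192.169.0.254 on eth0 while load-balancing the API server on port 8443 (lb_enable). The current VIP holder can be read from the Lease (hypothetical check; requires cluster access):
	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'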
	I1204 15:33:29.926229   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:33:29.933952   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:33:29.934013   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 15:33:29.941532   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1204 15:33:29.955276   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:33:29.968570   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:33:29.982327   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:33:29.985248   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:33:29.994738   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:30.085095   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:30.100297   20196 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:30.100505   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:30.121980   20196 out.go:177] * Verifying Kubernetes components...
	I1204 15:33:30.163546   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:30.296003   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:33:30.317056   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:33:30.317267   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 15:33:30.317312   20196 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1204 15:33:30.317488   20196 node_ready.go:35] waiting up to 6m0s for node "ha-098000-m02" to be "Ready" ...
	I1204 15:33:30.317571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:30.317576   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:30.317583   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:30.317592   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.429719   20196 round_trippers.go:574] Response Status: 200 OK in 8111 milliseconds
	I1204 15:33:38.437420   20196 node_ready.go:49] node "ha-098000-m02" has status "Ready":"True"
	I1204 15:33:38.437441   20196 node_ready.go:38] duration metric: took 8.119707596s for node "ha-098000-m02" to be "Ready" ...
	I1204 15:33:38.437450   20196 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:33:38.437502   20196 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 15:33:38.437515   20196 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 15:33:38.437571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:38.437578   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.437593   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.437599   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.455661   20196 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1204 15:33:38.464148   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.464210   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:33:38.464215   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.464221   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.464224   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.470699   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:33:38.471292   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.471302   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.471308   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.471312   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.481534   20196 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 15:33:38.481959   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.481970   20196 pod_ready.go:82] duration metric: took 17.803771ms for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.481977   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.482020   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:33:38.482026   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.482032   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.482035   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.487605   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:38.488267   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.488322   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.488329   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.488343   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.490575   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:38.491180   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.491192   20196 pod_ready.go:82] duration metric: took 9.208421ms for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.491202   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.491280   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000
	I1204 15:33:38.491287   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.491293   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.491297   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.494530   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:38.495165   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:38.495173   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.495180   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.495184   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.499549   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:38.499961   20196 pod_ready.go:93] pod "etcd-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.499972   20196 pod_ready.go:82] duration metric: took 8.763238ms for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.499980   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.500023   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m02
	I1204 15:33:38.500028   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.500034   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.500039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.506409   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:33:38.506828   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:38.506837   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.506843   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.506846   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.511940   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:38.512316   20196 pod_ready.go:93] pod "etcd-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.512327   20196 pod_ready.go:82] duration metric: took 12.340986ms for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.512334   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.512373   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m03
	I1204 15:33:38.512378   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.512384   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.512389   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.516730   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:38.638087   20196 request.go:632] Waited for 120.794515ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:38.638124   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:38.638130   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.638161   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.638169   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.640203   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:38.640614   20196 pod_ready.go:93] pod "etcd-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:38.640625   20196 pod_ready.go:82] duration metric: took 128.282ms for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
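The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's default client-side rate limiter once its burst allowance is spent; they are not server-side API Priority and Fairness delays. A minimal sketch of where that limiter is configured, assuming client-go's stock defaults rather than minikube's actual settings, and a kubeconfig path chosen only to keep the sketch self-contained:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load a kubeconfig; clientcmd.RecommendedHomeFile (~/.kube/config) is a
    	// sketch assumption -- minikube points its client at its own kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to QPS=5 with Burst=10. Once the burst is spent,
    	// further requests are delayed locally and the client logs the
    	// "Waited for ... due to client-side throttling" lines seen in this trace.
    	cfg.QPS = 5
    	cfg.Burst = 10
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Printf("client ready (QPS=%v Burst=%d): %T\n", cfg.QPS, cfg.Burst, cs)
    }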
	I1204 15:33:38.640638   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:38.838617   20196 request.go:632] Waited for 197.931176ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:33:38.838688   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:33:38.838697   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:38.838706   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:38.838712   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:38.840867   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:39.037679   20196 request.go:632] Waited for 196.178205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:39.037714   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:39.037719   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.037772   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.037777   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.042421   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:39.042726   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.042736   20196 pod_ready.go:82] duration metric: took 402.080499ms for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.042743   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.237786   20196 request.go:632] Waited for 195.001118ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:33:39.237820   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:33:39.237825   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.237830   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.237835   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.243495   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:39.437668   20196 request.go:632] Waited for 193.740455ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:39.437701   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:39.437706   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.437712   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.437719   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.440123   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:39.440472   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.440482   20196 pod_ready.go:82] duration metric: took 397.72282ms for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.440490   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.638172   20196 request.go:632] Waited for 197.630035ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:33:39.638227   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:33:39.638235   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.638277   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.638301   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.641465   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:39.837863   20196 request.go:632] Waited for 195.844278ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:39.837914   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:39.837923   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:39.838008   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:39.838017   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:39.841077   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:39.841414   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:39.841423   20196 pod_ready.go:82] duration metric: took 400.91619ms for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:39.841431   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.037805   20196 request.go:632] Waited for 196.32052ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:33:40.037839   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:33:40.037845   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.037851   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.037857   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.040255   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.238963   20196 request.go:632] Waited for 198.140778ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:40.239022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:40.239028   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.239040   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.239045   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.242092   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:40.242401   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:40.242411   20196 pod_ready.go:82] duration metric: took 400.963216ms for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.242419   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.438693   20196 request.go:632] Waited for 196.229899ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:33:40.438729   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:33:40.438735   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.438741   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.438745   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.441139   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.637709   20196 request.go:632] Waited for 196.13524ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:40.637752   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:40.637777   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.637783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.637787   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.640278   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:40.640704   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:40.640714   20196 pod_ready.go:82] duration metric: took 398.278068ms for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.640722   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:40.838825   20196 request.go:632] Waited for 198.055929ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:33:40.838901   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:33:40.838908   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:40.838927   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:40.838932   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:40.841541   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.037964   20196 request.go:632] Waited for 195.880635ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:41.038037   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:41.038043   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.038049   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.038054   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.041754   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.042231   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.042241   20196 pod_ready.go:82] duration metric: took 401.502224ms for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.042248   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.237873   20196 request.go:632] Waited for 195.582123ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:33:41.237946   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:33:41.237952   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.237957   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.237961   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.240730   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.438126   20196 request.go:632] Waited for 196.947205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:41.438157   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:41.438167   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.438207   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.438212   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.440777   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:33:41.441074   20196 pod_ready.go:93] pod "kube-proxy-8dv6r" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.441084   20196 pod_ready.go:82] duration metric: took 398.818652ms for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.441091   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.639164   20196 request.go:632] Waited for 198.003801ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:33:41.639309   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:33:41.639320   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.639331   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.639338   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.643045   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.838863   20196 request.go:632] Waited for 195.192063ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:41.838912   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:41.838924   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:41.838946   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:41.838954   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:41.842314   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:41.842750   20196 pod_ready.go:93] pod "kube-proxy-9strn" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:41.842763   20196 pod_ready.go:82] duration metric: took 401.652541ms for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:41.842771   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.039281   20196 request.go:632] Waited for 196.459472ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:33:42.039417   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:33:42.039428   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.039439   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.039447   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.042816   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:42.238811   20196 request.go:632] Waited for 195.378249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:33:42.238885   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:33:42.238891   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.238898   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.238903   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.240764   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:42.241072   20196 pod_ready.go:93] pod "kube-proxy-mz4q2" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:42.241084   20196 pod_ready.go:82] duration metric: took 398.294263ms for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.241092   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.438843   20196 request.go:632] Waited for 197.705446ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:33:42.438887   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:33:42.438898   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.438905   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.438908   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.440868   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:42.638818   20196 request.go:632] Waited for 197.361352ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:42.638884   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:42.638895   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.638906   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.638914   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.642158   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:42.642556   20196 pod_ready.go:93] pod "kube-proxy-rf4cp" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:42.642569   20196 pod_ready.go:82] duration metric: took 401.459636ms for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.642580   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:42.839526   20196 request.go:632] Waited for 196.890487ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:33:42.839701   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:33:42.839713   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:42.839724   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:42.839732   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:42.843198   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.037789   20196 request.go:632] Waited for 194.105591ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:43.037944   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:33:43.037961   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.037975   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.037982   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.041343   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.041920   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.041933   20196 pod_ready.go:82] duration metric: took 399.3347ms for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.041942   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.239892   20196 request.go:632] Waited for 197.874831ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:33:43.239961   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:33:43.239969   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.239983   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.239991   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.243085   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.438099   20196 request.go:632] Waited for 194.176391ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:43.438141   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:33:43.438168   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.438176   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.438185   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.440115   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:33:43.440578   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.440586   20196 pod_ready.go:82] duration metric: took 398.625667ms for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.440601   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.639811   20196 request.go:632] Waited for 199.133254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:33:43.639908   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:33:43.639919   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.639930   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.639940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.643164   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.839903   20196 request.go:632] Waited for 196.135821ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:43.839967   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:33:43.839976   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.839987   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.839994   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.843566   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:33:43.844161   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:33:43.844175   20196 pod_ready.go:82] duration metric: took 403.555453ms for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:33:43.844208   20196 pod_ready.go:39] duration metric: took 5.406590624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
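Each pod_ready block in the trace above is the same poll: GET the pod, look for a Ready condition with status True, GET its node, and retry within the 6m0s budget. The following client-go sketch shows that condition check; it is illustrative rather than minikube's pod_ready code, and the kubeconfig path and 500ms poll interval are assumptions.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports condition Ready=True,
    // mirroring the pod_ready.go:79 / pod_ready.go:93 pairs in the trace.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling until timeout
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	// The kubeconfig location is a sketch assumption.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-ha-098000"); err != nil {
    		panic(err)
    	}
    	fmt.Println(`pod has status "Ready":"True"`)
    }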
	I1204 15:33:43.844253   20196 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:33:43.844326   20196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:33:43.855983   20196 api_server.go:72] duration metric: took 13.755275558s to wait for apiserver process to appear ...
	I1204 15:33:43.855995   20196 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:33:43.856010   20196 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1204 15:33:43.860186   20196 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1204 15:33:43.860225   20196 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1204 15:33:43.860230   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:43.860243   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:43.860246   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:43.860683   20196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 15:33:43.860804   20196 api_server.go:141] control plane version: v1.31.2
	I1204 15:33:43.860815   20196 api_server.go:131] duration metric: took 4.815788ms to wait for apiserver health ...
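The healthz check above is a bare HTTPS GET whose 200 response body must be the literal "ok". A self-contained sketch of that probe under stated assumptions: the InsecureSkipVerify transport stands in for the cluster-CA trust and client certificates the real client uses.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy GETs <base>/healthz and treats a 200 "ok" body as healthy.
    func apiserverHealthy(base string) (bool, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Skipping certificate verification only to keep the sketch
    			// self-contained; do not do this against a real cluster.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(base + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
    	ok, err := apiserverHealthy("https://192.169.0.5:8443")
    	fmt.Println("apiserver healthy:", ok, err)
    }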
	I1204 15:33:43.860824   20196 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 15:33:44.038297   20196 request.go:632] Waited for 177.420142ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.038389   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.038399   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.038411   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.038421   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.044078   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:44.049007   20196 system_pods.go:59] 26 kube-system pods found
	I1204 15:33:44.049023   20196 system_pods.go:61] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:33:44.049029   20196 system_pods.go:61] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:33:44.049032   20196 system_pods.go:61] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:33:44.049034   20196 system_pods.go:61] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:33:44.049038   20196 system_pods.go:61] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:33:44.049041   20196 system_pods.go:61] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:33:44.049043   20196 system_pods.go:61] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:33:44.049046   20196 system_pods.go:61] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:33:44.049049   20196 system_pods.go:61] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:33:44.049051   20196 system_pods.go:61] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:33:44.049054   20196 system_pods.go:61] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:33:44.049056   20196 system_pods.go:61] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:33:44.049059   20196 system_pods.go:61] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:33:44.049069   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:33:44.049073   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:33:44.049075   20196 system_pods.go:61] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:33:44.049078   20196 system_pods.go:61] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:33:44.049080   20196 system_pods.go:61] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:33:44.049084   20196 system_pods.go:61] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:33:44.049087   20196 system_pods.go:61] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:33:44.049089   20196 system_pods.go:61] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:33:44.049092   20196 system_pods.go:61] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:33:44.049094   20196 system_pods.go:61] "kube-vip-ha-098000" [618bf60c-e57e-4c04-832e-71eebf18044d] Running
	I1204 15:33:44.049097   20196 system_pods.go:61] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:33:44.049099   20196 system_pods.go:61] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:33:44.049102   20196 system_pods.go:61] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running
	I1204 15:33:44.049106   20196 system_pods.go:74] duration metric: took 188.271977ms to wait for pod list to return data ...
	I1204 15:33:44.049112   20196 default_sa.go:34] waiting for default service account to be created ...
	I1204 15:33:44.239205   20196 request.go:632] Waited for 190.005694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:33:44.239263   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:33:44.239272   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.239283   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.239322   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.243527   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:44.243704   20196 default_sa.go:45] found service account: "default"
	I1204 15:33:44.243713   20196 default_sa.go:55] duration metric: took 194.591962ms for default service account to be created ...
	I1204 15:33:44.243719   20196 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 15:33:44.439115   20196 request.go:632] Waited for 195.322716ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.439234   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:33:44.439246   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.439258   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.439264   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.444755   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:33:44.449718   20196 system_pods.go:86] 26 kube-system pods found
	I1204 15:33:44.449733   20196 system_pods.go:89] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:33:44.449738   20196 system_pods.go:89] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:33:44.449741   20196 system_pods.go:89] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:33:44.449744   20196 system_pods.go:89] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:33:44.449748   20196 system_pods.go:89] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:33:44.449750   20196 system_pods.go:89] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:33:44.449753   20196 system_pods.go:89] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:33:44.449755   20196 system_pods.go:89] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:33:44.449758   20196 system_pods.go:89] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:33:44.449761   20196 system_pods.go:89] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:33:44.449765   20196 system_pods.go:89] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:33:44.449768   20196 system_pods.go:89] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:33:44.449771   20196 system_pods.go:89] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:33:44.449774   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:33:44.449777   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:33:44.449783   20196 system_pods.go:89] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:33:44.449786   20196 system_pods.go:89] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:33:44.449789   20196 system_pods.go:89] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:33:44.449793   20196 system_pods.go:89] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:33:44.449795   20196 system_pods.go:89] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:33:44.449798   20196 system_pods.go:89] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:33:44.449801   20196 system_pods.go:89] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:33:44.449804   20196 system_pods.go:89] "kube-vip-ha-098000" [618bf60c-e57e-4c04-832e-71eebf18044d] Running
	I1204 15:33:44.449806   20196 system_pods.go:89] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:33:44.449810   20196 system_pods.go:89] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:33:44.449813   20196 system_pods.go:89] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running
	I1204 15:33:44.449818   20196 system_pods.go:126] duration metric: took 206.089298ms to wait for k8s-apps to be running ...
	I1204 15:33:44.449823   20196 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 15:33:44.449890   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:33:44.461452   20196 system_svc.go:56] duration metric: took 11.623487ms WaitForService to wait for kubelet
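The kubelet probe above treats a zero exit status from systemctl is-active --quiet as "running". A local sketch of the same check; minikube itself executes this command on the guest VM over SSH (the ssh_runner line above), not on the host.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the command in the trace; a zero exit status means the unit
    	// is active, any non-zero status (or an error) reports it as inactive.
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }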
	I1204 15:33:44.461466   20196 kubeadm.go:582] duration metric: took 14.360743481s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:33:44.461484   20196 node_conditions.go:102] verifying NodePressure condition ...
	I1204 15:33:44.639462   20196 request.go:632] Waited for 177.925125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1204 15:33:44.639538   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1204 15:33:44.639548   20196 round_trippers.go:469] Request Headers:
	I1204 15:33:44.639560   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:33:44.639568   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:33:44.643595   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:33:44.644812   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644828   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644839   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644849   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644853   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644856   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644858   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:33:44.644861   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:33:44.644864   20196 node_conditions.go:105] duration metric: took 183.370218ms to run NodePressure ...
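The NodePressure pass lists all nodes and reads each node's reported capacity, which is where the per-node ephemeral-storage and CPU figures above come from. A client-go sketch that prints the same two fields per node (the kubeconfig path is again a sketch assumption):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Matches the trace: ephemeral storage reported in Ki, CPU as a count.
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    	}
    }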
	I1204 15:33:44.644872   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:33:44.644890   20196 start.go:255] writing updated cluster config ...
	I1204 15:33:44.665849   20196 out.go:201] 
	I1204 15:33:44.687912   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:44.688042   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.710522   20196 out.go:177] * Starting "ha-098000-m03" control-plane node in "ha-098000" cluster
	I1204 15:33:44.752466   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:44.752500   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:44.752679   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:33:44.752697   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:44.752830   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.753998   20196 start.go:360] acquireMachinesLock for ha-098000-m03: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:44.754068   20196 start.go:364] duration metric: took 52.377µs to acquireMachinesLock for "ha-098000-m03"
	I1204 15:33:44.754085   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:33:44.754091   20196 fix.go:54] fixHost starting: m03
	I1204 15:33:44.754406   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:33:44.754430   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:33:44.765918   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58653
	I1204 15:33:44.766304   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:33:44.766704   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:33:44.766719   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:33:44.766938   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:33:44.767056   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:44.767166   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetState
	I1204 15:33:44.767251   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.767322   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid from json: 19347
	I1204 15:33:44.768480   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid 19347 missing from process table
	I1204 15:33:44.768517   20196 fix.go:112] recreateIfNeeded on ha-098000-m03: state=Stopped err=<nil>
	I1204 15:33:44.768528   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	W1204 15:33:44.768610   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:33:44.789653   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m03" ...
	I1204 15:33:44.831751   20196 main.go:141] libmachine: (ha-098000-m03) Calling .Start
	I1204 15:33:44.832023   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.832066   20196 main.go:141] libmachine: (ha-098000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid
	I1204 15:33:44.834593   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid 19347 missing from process table
	I1204 15:33:44.834606   20196 main.go:141] libmachine: (ha-098000-m03) DBG | pid 19347 is in state "Stopped"
	I1204 15:33:44.834626   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid...
	I1204 15:33:44.835523   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Using UUID eac2e001-90c5-40d6-830d-b844e6baedeb
	I1204 15:33:44.861764   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Generated MAC 56:f8:e7:bc:e7:07
	I1204 15:33:44.861784   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:33:44.862005   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eac2e001-90c5-40d6-830d-b844e6baedeb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:44.862041   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eac2e001-90c5-40d6-830d-b844e6baedeb", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000468330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:33:44.862100   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "eac2e001-90c5-40d6-830d-b844e6baedeb", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/ha-098000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:33:44.862139   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U eac2e001-90c5-40d6-830d-b844e6baedeb -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/ha-098000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:33:44.862604   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:33:44.864474   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 DEBUG: hyperkit: Pid is 20231
	I1204 15:33:44.864862   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Attempt 0
	I1204 15:33:44.864878   20196 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:33:44.864933   20196 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid from json: 20231
	I1204 15:33:44.866074   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Searching for 56:f8:e7:bc:e7:07 in /var/db/dhcpd_leases ...
	I1204 15:33:44.866145   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:33:44.866158   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f4d1}
	I1204 15:33:44.866167   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:33:44.866177   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:33:44.866182   20196 main.go:141] libmachine: (ha-098000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f3e2}
	I1204 15:33:44.866187   20196 main.go:141] libmachine: (ha-098000-m03) DBG | Found match: 56:f8:e7:bc:e7:07
	I1204 15:33:44.866193   20196 main.go:141] libmachine: (ha-098000-m03) DBG | IP: 192.169.0.7
	I1204 15:33:44.866266   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetConfigRaw
	I1204 15:33:44.866960   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:44.867187   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:33:44.867733   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:33:44.867748   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:44.867880   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:44.867991   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:44.868083   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:44.868175   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:44.868275   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:44.868449   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:44.868607   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:44.868615   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:33:44.875700   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:33:44.885221   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:33:44.886534   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:44.886590   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:44.886624   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:44.886641   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:45.310864   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:33:45.310888   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:33:45.426378   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:33:45.426408   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:33:45.426418   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:33:45.426427   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:33:45.427201   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:33:45.427213   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:33:51.200443   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:33:51.200513   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:33:51.200524   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:33:51.225933   20196 main.go:141] libmachine: (ha-098000-m03) DBG | 2024/12/04 15:33:51 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:33:55.935290   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:33:55.935305   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:55.935436   20196 buildroot.go:166] provisioning hostname "ha-098000-m03"
	I1204 15:33:55.935445   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:55.935551   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:55.935640   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:55.935732   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:55.935825   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:55.935912   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:55.936073   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:55.936205   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:55.936213   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m03 && echo "ha-098000-m03" | sudo tee /etc/hostname
	I1204 15:33:56.008649   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m03
	
	I1204 15:33:56.008663   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.008821   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.008915   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.009001   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.009093   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.009247   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.009386   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.009397   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:33:56.076925   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
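
For reference, the shell fragment above is an idempotent /etc/hosts fix-up: append a 127.0.1.1 entry only if the hostname is absent, rewriting any existing 127.0.1.1 line in place. A minimal Go sketch of the same logic follows; it is illustrative only (the file path, hostname, and ensureHostsEntry helper are hypothetical, not minikube's implementation).

	package main
	
	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)
	
	// ensureHostsEntry rewrites the 127.0.1.1 line in a hosts file so it maps
	// to the given hostname, appending a new entry when none exists. It mirrors
	// the grep/sed/tee logic in the log above; error handling is kept minimal.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Already present? Nothing to do (the grep -xq '.*\s<hostname>' branch).
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + hostname
		var out string
		if loopback.Match(data) {
			// sed -i 's/^127.0.1.1\s.*/127.0.1.1 <hostname>/g'
			out = loopback.ReplaceAllString(string(data), entry)
		} else {
			// echo '127.0.1.1 <hostname>' | tee -a
			out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}
	
	func main() {
		if err := ensureHostsEntry("/tmp/hosts-example", "ha-098000-m03"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
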
	I1204 15:33:56.076941   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:33:56.076950   20196 buildroot.go:174] setting up certificates
	I1204 15:33:56.076956   20196 provision.go:84] configureAuth start
	I1204 15:33:56.076962   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetMachineName
	I1204 15:33:56.077121   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:56.077219   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.077318   20196 provision.go:143] copyHostCerts
	I1204 15:33:56.077346   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:56.077405   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:33:56.077411   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:33:56.077538   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:33:56.077740   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:56.077775   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:33:56.077780   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:33:56.077851   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:33:56.078007   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:56.078036   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:33:56.078041   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:33:56.078135   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:33:56.078295   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m03 san=[127.0.0.1 192.169.0.7 ha-098000-m03 localhost minikube]
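
The provision step above generates a server certificate whose SANs cover the node's addresses and names (127.0.0.1, 192.169.0.7, ha-098000-m03, localhost, minikube). A hedged Go sketch of issuing a certificate with those SANs via crypto/x509 follows; it self-signs for brevity, whereas the log shows a certificate signed by the profile's CA key.

	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway key for illustration; the log above uses the profile's CA key.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-098000-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SANs reported in the provision.go line above.
			DNSNames:    []string{"ha-098000-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		}
		// Self-signed for brevity (template doubles as parent); a real server
		// cert would pass the CA certificate and CA private key here instead.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
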
	I1204 15:33:56.184360   20196 provision.go:177] copyRemoteCerts
	I1204 15:33:56.184421   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:33:56.184436   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.184584   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.184682   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.184788   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.184878   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:56.222358   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:33:56.222423   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:33:56.242527   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:33:56.242598   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:33:56.262411   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:33:56.262492   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 15:33:56.282604   20196 provision.go:87] duration metric: took 205.634097ms to configureAuth
	I1204 15:33:56.282619   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:33:56.282802   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:56.282816   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:56.282954   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.283056   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.283161   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.283267   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.283366   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.283498   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.283620   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.283628   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:33:56.345040   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:33:56.345053   20196 buildroot.go:70] root file system type: tmpfs
	I1204 15:33:56.345129   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:33:56.345143   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.345280   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.345367   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.345443   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.345524   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.345668   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.345805   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.345851   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:33:56.424345   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:33:56.424363   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:56.424517   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:56.424685   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.424787   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:56.424878   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:56.425031   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:56.425156   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:56.425173   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:33:58.122525   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:33:58.122539   20196 machine.go:96] duration metric: took 13.254423135s to provisionDockerMachine
	I1204 15:33:58.122547   20196 start.go:293] postStartSetup for "ha-098000-m03" (driver="hyperkit")
	I1204 15:33:58.122554   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:33:58.122566   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.122762   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:33:58.122783   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.122871   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.122946   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.123045   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.123137   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.161639   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:33:58.164739   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:33:58.164749   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:33:58.164831   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:33:58.164968   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:33:58.164974   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:33:58.165140   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:33:58.173027   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:33:58.192093   20196 start.go:296] duration metric: took 69.536473ms for postStartSetup
	I1204 15:33:58.192114   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.192306   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:33:58.192320   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.192414   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.192509   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.192600   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.192674   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.230841   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:33:58.230926   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:33:58.265220   20196 fix.go:56] duration metric: took 13.510737637s for fixHost
	I1204 15:33:58.265271   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.265414   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.265524   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.265620   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.265713   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.265865   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:33:58.266013   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1204 15:33:58.266021   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:33:58.330663   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355238.486070391
	
	I1204 15:33:58.330676   20196 fix.go:216] guest clock: 1733355238.486070391
	I1204 15:33:58.330682   20196 fix.go:229] Guest: 2024-12-04 15:33:58.486070391 -0800 PST Remote: 2024-12-04 15:33:58.265237 -0800 PST m=+65.184150423 (delta=220.833391ms)
	I1204 15:33:58.330692   20196 fix.go:200] guest clock delta is within tolerance: 220.833391ms
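
The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the drift when it falls under a tolerance. A small Go sketch of that check; the one-second tolerance and the clockDelta helper are assumptions for illustration, not minikube's actual values.

	package main
	
	import (
		"fmt"
		"strconv"
		"time"
	)
	
	// clockDelta parses the guest's `date +%s.%N` output and returns the signed
	// offset from the host clock, as in the fix.go lines above.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}
	
	func main() {
		const tolerance = time.Second // illustrative; minikube's threshold may differ
		// Values taken from the log: guest 1733355238.486070391, host ...58.265237
		delta, err := clockDelta("1733355238.486070391", time.Unix(1733355238, 265237000))
		if err != nil {
			panic(err)
		}
		if delta < tolerance && delta > -tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}
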
	I1204 15:33:58.330696   20196 start.go:83] releasing machines lock for "ha-098000-m03", held for 13.576240131s
	I1204 15:33:58.330714   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.330854   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:33:58.352510   20196 out.go:177] * Found network options:
	I1204 15:33:58.380745   20196 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1204 15:33:58.401983   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:33:58.402013   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:58.402029   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402504   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402654   20196 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:33:58.402766   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:33:58.402819   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	W1204 15:33:58.402881   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:33:58.402902   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:33:58.402977   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.403000   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:33:58.403012   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:33:58.403174   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.403214   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:33:58.403349   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.403358   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:33:58.403564   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:33:58.403575   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:33:58.403741   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	W1204 15:33:58.437750   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:33:58.437828   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:33:58.485243   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:33:58.485257   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:58.485329   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:58.514237   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:33:58.528266   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:33:58.539804   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:33:58.539880   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:33:58.555961   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:58.566195   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:33:58.575257   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:33:58.584192   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:33:58.593620   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:33:58.603021   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:33:58.612370   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:33:58.621502   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:33:58.630294   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:33:58.630368   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:33:58.640300   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 15:33:58.648626   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:58.742860   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:33:58.760057   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:33:58.760138   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:33:58.778296   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:58.793165   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:33:58.807402   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:33:58.818936   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:58.829930   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:33:58.849768   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:33:58.861249   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:33:58.876335   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:33:58.879342   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:33:58.887395   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:33:58.901271   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:33:59.012726   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:33:59.108627   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:33:59.108651   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:33:59.122518   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:33:59.224950   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:34:01.525196   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.300161441s)
	I1204 15:34:01.525275   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:34:01.537533   20196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:34:01.552928   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:01.564251   20196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:34:01.666308   20196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:34:01.762184   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:01.857672   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:34:01.871507   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:01.882955   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:01.972213   20196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:34:02.036955   20196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:34:02.037050   20196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:34:02.042796   20196 start.go:563] Will wait 60s for crictl version
	I1204 15:34:02.042875   20196 ssh_runner.go:195] Run: which crictl
	I1204 15:34:02.046431   20196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:34:02.073232   20196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
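
"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a simple existence poll: stat the socket until it appears or the deadline passes. A hedged Go sketch of that wait loop follows (the 500ms interval and waitForSocket helper are guesses for illustration, not taken from minikube's source).

	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until the path exists or the deadline passes, the same
	// shape as the "Will wait 60s for socket path" step in the log above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is ready")
	}
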
	I1204 15:34:02.073324   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:02.089702   20196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:02.126985   20196 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 15:34:02.168586   20196 out.go:177]   - env NO_PROXY=192.169.0.5
	I1204 15:34:02.190567   20196 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I1204 15:34:02.211577   20196 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:34:02.211977   20196 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1204 15:34:02.216597   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:34:02.226113   20196 mustload.go:65] Loading cluster: ha-098000
	I1204 15:34:02.226314   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:02.226550   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:02.226577   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:02.238043   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58675
	I1204 15:34:02.238357   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:02.238749   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:02.238766   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:02.238998   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:02.239102   20196 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:34:02.239217   20196 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:02.239287   20196 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:34:02.240505   20196 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:34:02.240770   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:02.240796   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:02.252028   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58677
	I1204 15:34:02.252346   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:02.252700   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:02.252719   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:02.252937   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:02.253032   20196 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:34:02.253139   20196 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000 for IP: 192.169.0.7
	I1204 15:34:02.253146   20196 certs.go:194] generating shared ca certs ...
	I1204 15:34:02.253156   20196 certs.go:226] acquiring lock for ca certs: {Name:mk72c221ce3b7935966dd397ce28a59c2cdb859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:34:02.253308   20196 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key
	I1204 15:34:02.253362   20196 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key
	I1204 15:34:02.253371   20196 certs.go:256] generating profile certs ...
	I1204 15:34:02.253468   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key
	I1204 15:34:02.253856   20196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key.d946d3b4
	I1204 15:34:02.253925   20196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key
	I1204 15:34:02.253938   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1204 15:34:02.253962   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1204 15:34:02.253983   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1204 15:34:02.254009   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1204 15:34:02.254028   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1204 15:34:02.254046   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1204 15:34:02.254065   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1204 15:34:02.254082   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1204 15:34:02.254159   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem (1338 bytes)
	W1204 15:34:02.254203   20196 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821_empty.pem, impossibly tiny 0 bytes
	I1204 15:34:02.254211   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:34:02.254246   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem (1082 bytes)
	I1204 15:34:02.254278   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:34:02.254310   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem (1679 bytes)
	I1204 15:34:02.254374   20196 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:34:02.254409   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.254429   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem -> /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.254447   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.254475   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:34:02.254562   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:34:02.254640   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:34:02.254716   20196 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:34:02.254794   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:34:02.285982   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1204 15:34:02.289453   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1204 15:34:02.298834   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1204 15:34:02.302369   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1204 15:34:02.315418   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1204 15:34:02.318593   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1204 15:34:02.327312   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1204 15:34:02.330564   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1204 15:34:02.339456   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1204 15:34:02.342515   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1204 15:34:02.351231   20196 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1204 15:34:02.354286   20196 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1204 15:34:02.363156   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:34:02.384838   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:34:02.405926   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:34:02.426535   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:34:02.446742   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1204 15:34:02.466560   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:34:02.486853   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:34:02.507184   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:34:02.528073   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:34:02.548964   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/17821.pem --> /usr/share/ca-certificates/17821.pem (1338 bytes)
	I1204 15:34:02.569347   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /usr/share/ca-certificates/178212.pem (1708 bytes)
	I1204 15:34:02.589426   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1204 15:34:02.603866   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1204 15:34:02.617657   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1204 15:34:02.631813   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1204 15:34:02.645494   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1204 15:34:02.659961   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1204 15:34:02.673777   20196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1204 15:34:02.687446   20196 ssh_runner.go:195] Run: openssl version
	I1204 15:34:02.691739   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:34:02.700420   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.703973   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:13 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.704042   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:02.708497   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:34:02.717646   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17821.pem && ln -fs /usr/share/ca-certificates/17821.pem /etc/ssl/certs/17821.pem"
	I1204 15:34:02.726542   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.729989   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.730041   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17821.pem
	I1204 15:34:02.734277   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17821.pem /etc/ssl/certs/51391683.0"
	I1204 15:34:02.742686   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178212.pem && ln -fs /usr/share/ca-certificates/178212.pem /etc/ssl/certs/178212.pem"
	I1204 15:34:02.751027   20196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.754461   20196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.754515   20196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178212.pem
	I1204 15:34:02.758843   20196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178212.pem /etc/ssl/certs/3ec20f2e.0"
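
The ln -fs commands above install CA certificates under the OpenSSL hashed-name convention: /etc/ssl/certs/<subject-hash>.0 must point at the PEM file, where the hash comes from "openssl x509 -hash -noout" (e.g. b5213941 for minikubeCA.pem). A Go sketch that shells out to openssl the same way these remote commands do; the linkBySubjectHash helper is hypothetical.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash mirrors the remote commands above: compute the OpenSSL
	// subject hash of a CA cert, then symlink <hash>.0 to it in the certs dir.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
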
	I1204 15:34:02.767465   20196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:34:02.770903   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:34:02.776086   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:34:02.780679   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:34:02.785121   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:34:02.789654   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:34:02.794116   20196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
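
Each "openssl x509 -checkend 86400" run above succeeds only if the certificate will still be valid 24 hours from now. An equivalent check in Go with crypto/x509; the expiresWithin helper is illustrative, not minikube's code.

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether a PEM certificate's NotAfter falls inside
	// the window, the same check as `openssl x509 -checkend 86400` above.
	func expiresWithin(pemPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
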
	I1204 15:34:02.798756   20196 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1204 15:34:02.798834   20196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-098000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-098000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:34:02.798851   20196 kube-vip.go:115] generating kube-vip config ...
	I1204 15:34:02.798902   20196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1204 15:34:02.811676   20196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1204 15:34:02.811716   20196 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1204 15:34:02.811802   20196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 15:34:02.820056   20196 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:34:02.820120   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1204 15:34:02.827634   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1204 15:34:02.840903   20196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:34:02.854283   20196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1204 15:34:02.867957   20196 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1204 15:34:02.870915   20196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:34:02.880410   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:02.978715   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:34:02.992761   20196 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:34:02.992956   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:03.013320   20196 out.go:177] * Verifying Kubernetes components...
	I1204 15:34:03.055094   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:03.162591   20196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:34:03.175308   20196 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:34:03.175517   20196 kapi.go:59] client config for ha-098000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-17258/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe220d60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1204 15:34:03.175556   20196 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1204 15:34:03.175722   20196 node_ready.go:35] waiting up to 6m0s for node "ha-098000-m03" to be "Ready" ...
	I1204 15:34:03.175774   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:03.175780   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.175788   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.175793   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.177877   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.178182   20196 node_ready.go:49] node "ha-098000-m03" has status "Ready":"True"
	I1204 15:34:03.178191   20196 node_ready.go:38] duration metric: took 2.460684ms for node "ha-098000-m03" to be "Ready" ...
	I1204 15:34:03.178204   20196 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:34:03.178249   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:03.178255   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.178261   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.178265   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.181589   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:03.187858   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:03.187917   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:03.187923   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.187928   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.187931   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.190071   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.190536   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:03.190544   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.190550   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.190553   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.192357   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
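
The round_trippers lines above poll the API server directly over REST; node_ready.go treats the node as ready once its Ready condition reports True. The same check written against k8s.io/client-go, as a hedged sketch: the kubeconfig path and poll interval are taken loosely from this log, and the nodeReady helper is hypothetical.

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady mirrors the node_ready.go wait above: GET the node and look
	// for the Ready condition with status True.
	func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/20045-17258/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := nodeReady(clientset, "ha-098000-m03")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
		}
	}
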
	I1204 15:34:03.689890   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:03.689913   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.689960   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.689970   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.692722   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:03.693137   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:03.693145   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:03.693150   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:03.693154   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:03.694862   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:04.188595   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:04.188612   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.188618   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.188622   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.190926   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:04.191442   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:04.191451   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.191457   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.191460   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.193377   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:04.689410   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:04.689427   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.689433   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.689436   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.691829   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:04.692311   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:04.692320   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:04.692326   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:04.692329   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:04.694756   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.188051   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:05.188069   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.188075   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.188079   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.190537   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.191234   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:05.191244   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.191250   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.191254   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.193184   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:05.193754   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:05.689571   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:05.689583   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.689589   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.689592   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.692119   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:05.693045   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:05.693054   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:05.693060   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:05.693070   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:05.695078   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:06.188182   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:06.188196   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.188203   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.188206   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.190803   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:06.191335   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:06.191343   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.191353   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.191358   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.193354   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:06.688125   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:06.688144   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.688150   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.688153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.698567   20196 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1204 15:34:06.699659   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:06.699669   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:06.699674   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:06.699678   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:06.702231   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:07.188129   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:07.188142   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.188149   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.188152   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.190314   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:07.190783   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:07.190793   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.190799   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.190803   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.192721   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.689429   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:07.689444   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.689450   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.689453   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.691383   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.691809   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:07.691816   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:07.691822   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:07.691827   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:07.693593   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:07.693894   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:08.189338   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:08.189353   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.189361   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.189365   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.191565   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:08.192110   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:08.192118   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.192124   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.192134   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.193879   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:08.689140   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:08.689155   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.689194   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.689198   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.691672   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:08.692190   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:08.692197   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:08.692203   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:08.692206   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:08.694257   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.189377   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:09.189389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.189396   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.189399   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.191765   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.192318   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:09.192326   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.192333   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.192337   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.194226   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:09.688422   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:09.688435   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.688441   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.688445   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.690918   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:09.691538   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:09.691546   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:09.691552   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:09.691556   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:09.693405   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:10.188400   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:10.188426   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.188438   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.188445   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.191226   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:10.191923   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:10.191930   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.191936   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.191940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.193682   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:10.194054   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:10.689544   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:10.689566   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.689601   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.689607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.692171   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:10.692830   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:10.692842   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:10.692848   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:10.692852   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:10.694354   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:11.188970   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:11.188983   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.188989   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.188992   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.193348   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:11.193835   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:11.193844   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.193850   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.193854   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.195899   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:11.688737   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:11.688752   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.688758   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.688761   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.691007   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:11.691483   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:11.691491   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:11.691496   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:11.691500   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:11.693198   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:12.188889   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:12.188972   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.188986   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.188999   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.192039   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:12.192581   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:12.192589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.192595   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.192598   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.194300   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:12.194673   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:12.688761   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:12.688869   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.688880   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.688888   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.691475   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:12.692022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:12.692029   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:12.692035   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:12.692039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:12.693737   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.190399   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:13.190424   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.190436   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.190441   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.193795   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:13.194709   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:13.194717   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.194722   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.194725   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.196228   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.688349   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:13.688361   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.688367   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.688370   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.690278   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:13.690775   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:13.690783   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:13.690788   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:13.690792   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:13.692350   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.189443   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:14.189461   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.189470   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.189474   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.191713   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:14.192328   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:14.192336   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.192341   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.192345   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.194132   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.689369   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:14.689471   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.689487   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.689522   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.693058   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:14.693755   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:14.693762   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:14.693768   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:14.693771   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:14.695478   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:14.695986   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:15.189753   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:15.189777   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.189833   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.189842   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.193300   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:15.193825   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:15.193835   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.193842   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.193848   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.195490   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:15.688564   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:15.688589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.688600   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.688607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.691559   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:15.692137   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:15.692145   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:15.692152   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:15.692156   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:15.693792   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:16.188974   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:16.188991   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.188999   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.189003   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.191876   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:16.192266   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:16.192273   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.192279   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.192283   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.193909   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:16.689589   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:16.689601   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.689607   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.689609   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.691735   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:16.692340   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:16.692348   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:16.692354   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:16.692364   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:16.694139   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:17.188693   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:17.188719   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.188730   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.188737   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.192306   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:17.192880   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:17.192888   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.192893   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.192896   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.194607   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:17.194930   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:17.689803   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:17.689822   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.689833   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.689840   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.692900   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:17.693582   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:17.693591   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:17.693596   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:17.693600   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:17.695568   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:18.189872   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:18.189891   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.189903   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.189909   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.193143   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:18.193659   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:18.193669   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.193677   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.193682   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.195539   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:18.689089   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:18.689110   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.689121   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.689128   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.692465   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:18.693092   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:18.693099   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:18.693105   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:18.693109   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:18.694811   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.188836   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:19.188866   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.188885   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.188893   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.191083   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:19.191481   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:19.191489   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.191494   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.191498   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.193210   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.688920   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:19.689019   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.689034   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.689040   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.692204   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:19.692887   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:19.692895   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:19.692901   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:19.692905   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:19.694482   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:19.694834   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:20.189463   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:20.189482   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.189495   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.189507   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.192820   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:20.193489   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:20.193497   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.193503   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.193506   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.195170   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:20.689312   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:20.689335   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.689345   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.689353   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.692898   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:20.693406   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:20.693413   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:20.693419   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:20.693435   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:20.695237   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:21.189479   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:21.189499   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.189511   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.189519   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.192490   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:21.193119   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:21.193127   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.193132   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.193136   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.194670   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:21.689574   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:21.689589   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.689595   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.689598   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.691684   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:21.692133   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:21.692140   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:21.692145   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:21.692156   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:21.694020   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:22.189311   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:22.189327   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.189334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.189337   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.191942   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:22.192424   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:22.192432   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.192438   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.192441   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.194080   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:22.194500   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:22.689269   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:22.689284   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.689293   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.689297   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.691724   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:22.692389   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:22.692397   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:22.692404   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:22.692407   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:22.694417   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:23.188903   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:23.188937   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.188944   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.188948   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.191281   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:23.191769   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:23.191776   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.191783   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.191786   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.193597   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:23.689658   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:23.689673   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.689682   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.689688   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.692154   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:23.692597   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:23.692605   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:23.692611   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:23.692614   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:23.694442   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:24.190414   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:24.190439   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.190448   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.190453   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.193694   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:24.194336   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:24.194343   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.194349   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.194352   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.196204   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:24.196507   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:24.689283   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:24.689324   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.689334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.689339   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.691786   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:24.692252   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:24.692260   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:24.692265   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:24.692269   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:24.694045   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:25.189972   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:25.189988   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.189995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.189997   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.192150   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:25.192590   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:25.192598   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.192604   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.192607   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.194554   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:25.689840   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:25.689893   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.689902   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.689908   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.692432   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:25.693530   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:25.693539   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:25.693545   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:25.693556   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:25.695085   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.188685   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:26.188774   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.188787   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.188792   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.191478   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:26.191981   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:26.191990   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.191995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.191998   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.193972   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.689955   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:26.690060   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.690076   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.690084   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.693025   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:26.693583   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:26.693591   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:26.693596   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:26.693601   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:26.695193   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:26.695569   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:27.190057   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:27.190079   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.190096   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.190102   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.193105   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:27.193849   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:27.193860   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.193868   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.193873   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.195538   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:27.688758   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:27.688772   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.688779   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.688783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.694666   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:27.695270   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:27.695278   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:27.695283   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:27.695288   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:27.696913   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:28.188770   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:28.188819   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.188832   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.188840   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.191808   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:28.192403   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:28.192411   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.192416   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.192420   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.194136   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:28.689405   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:28.689487   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.689503   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.689511   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.694694   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:28.695230   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:28.695237   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:28.695243   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:28.695246   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:28.697820   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:28.698133   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:29.190106   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:29.190125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.190138   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.190143   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.193071   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:29.193687   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:29.193698   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.193706   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.193711   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.195444   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:29.689830   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:29.689849   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.689862   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.689867   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.692977   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:29.693745   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:29.693753   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:29.693759   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:29.693762   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:29.695525   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:30.190945   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:30.190965   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.190976   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.190988   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.195195   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:30.195850   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:30.195859   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.195865   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.195869   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.197592   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:30.689476   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:30.689500   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.689510   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.689516   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.692808   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:30.693458   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:30.693466   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:30.693471   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:30.693474   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:30.695140   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:31.189274   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:31.189389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.189404   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.189413   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.192545   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:31.193168   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:31.193179   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.193186   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.193193   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.194805   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:31.195157   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:31.690066   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:31.690125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.690139   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.690147   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.693489   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:31.694073   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:31.694084   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:31.694093   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:31.694098   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:31.695789   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:32.190294   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:32.190315   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.190326   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.190333   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.193258   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:32.193839   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:32.193846   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.193852   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.193856   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.195470   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:32.689113   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:32.689137   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.689148   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.689153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.692269   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:32.692828   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:32.692836   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:32.692842   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:32.692845   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:32.694429   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.188950   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:33.188969   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.188980   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.188987   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.191891   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:33.192381   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:33.192389   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.192395   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.192400   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.194337   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.690112   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:33.690134   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.690145   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.690153   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.693581   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:33.694215   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:33.694223   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:33.694229   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:33.694232   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:33.696177   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:33.696454   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:34.189881   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:34.189900   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.189912   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.189918   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.193287   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:34.193886   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:34.193897   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.193909   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.193915   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.195881   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:34.689892   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:34.689916   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.689931   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.689940   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.693606   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:34.694219   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:34.694227   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:34.694234   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:34.694237   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:34.696105   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.188973   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:35.189024   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.189039   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.189046   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.192172   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:35.192755   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:35.192763   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.192769   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.192772   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.194518   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.690180   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:35.690201   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.690214   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.690223   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.694006   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:35.694605   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:35.694612   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:35.694619   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:35.694622   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:35.696235   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:35.696565   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:36.189741   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:36.189767   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.189779   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.189785   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.193344   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:36.194036   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:36.194047   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.194055   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.194059   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.195836   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:36.690199   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:36.690224   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.690236   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.690241   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.693462   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:36.694091   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:36.694102   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:36.694110   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:36.694116   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:36.695766   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:37.190287   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:37.190309   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.190320   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.190326   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.196511   20196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1204 15:34:37.197043   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:37.197052   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.197058   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.197061   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.199818   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:37.690095   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:37.690118   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.690129   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.690136   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.693801   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:37.694618   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:37.694626   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:37.694632   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:37.694636   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:37.696670   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:37.697007   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:38.190293   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:38.190317   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.190329   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.190338   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.194628   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:38.195183   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:38.195190   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.195196   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.195201   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.197386   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:38.689866   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:38.689889   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.689900   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.689905   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.693601   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:38.694401   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:38.694412   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:38.694420   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:38.694426   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:38.696297   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:39.190990   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:39.191012   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.191024   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.191031   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.198155   20196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1204 15:34:39.199473   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:39.199482   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.199488   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.199493   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.205055   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:39.690106   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:39.690130   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.690142   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.690147   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.693615   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:39.694445   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:39.694452   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:39.694458   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:39.694462   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:39.696222   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.189693   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2z7lq
	I1204 15:34:40.189718   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.189731   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.189746   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.193370   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:40.194004   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.194012   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.194018   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.194021   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.195604   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.195934   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:40.195944   20196 pod_ready.go:82] duration metric: took 37.007028934s for pod "coredns-7c65d6cfc9-2z7lq" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:40.195952   20196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:40.195984   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:40.195989   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.195995   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.195999   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.197711   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.198120   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.198128   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.198134   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.198136   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.199690   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:40.696200   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:40.696219   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.696228   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.696232   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.698719   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:40.699262   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:40.699270   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:40.699277   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:40.699281   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:40.701563   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:41.196423   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:41.196440   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.196446   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.196449   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.199972   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:41.200435   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:41.200444   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.200449   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.200454   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.202156   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:41.696302   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:41.696325   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.696334   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.696376   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.698859   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:41.699465   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:41.699474   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:41.699480   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:41.699486   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:41.701569   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.197903   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:42.197925   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.197937   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.197942   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.200867   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.201412   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.201420   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.201427   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.201431   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.203130   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.203467   20196 pod_ready.go:103] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"False"
	I1204 15:34:42.697162   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-75cm5
	I1204 15:34:42.697182   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.697194   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.697200   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.700051   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.700562   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.700570   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.700576   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.700579   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.702701   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:42.703063   20196 pod_ready.go:93] pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.703073   20196 pod_ready.go:82] duration metric: took 2.507044671s for pod "coredns-7c65d6cfc9-75cm5" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.703080   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.703116   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000
	I1204 15:34:42.703121   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.703129   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.703134   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.705021   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.705585   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.705592   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.705598   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.705609   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.707581   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.708069   20196 pod_ready.go:93] pod "etcd-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.708079   20196 pod_ready.go:82] duration metric: took 4.993321ms for pod "etcd-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.708086   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.708121   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m02
	I1204 15:34:42.708126   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.708131   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.708135   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.710061   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.710514   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:42.710522   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.710528   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.710532   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.712173   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.712569   20196 pod_ready.go:93] pod "etcd-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.712578   20196 pod_ready.go:82] duration metric: took 4.485807ms for pod "etcd-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.712584   20196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.712616   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-098000-m03
	I1204 15:34:42.712621   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.712627   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.712630   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.714463   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.714960   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:42.714968   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.714976   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.714980   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.716756   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.717063   20196 pod_ready.go:93] pod "etcd-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.717072   20196 pod_ready.go:82] duration metric: took 4.482301ms for pod "etcd-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.717082   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.717112   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000
	I1204 15:34:42.717116   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.717122   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.717126   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.718813   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.719178   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:42.719186   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.719192   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.719196   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.720739   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:42.721127   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:42.721135   20196 pod_ready.go:82] duration metric: took 4.047168ms for pod "kube-apiserver-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.721141   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:42.898812   20196 request.go:632] Waited for 177.546709ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:34:42.898865   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m02
	I1204 15:34:42.898875   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:42.898884   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:42.898890   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:42.901957   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.097426   20196 request.go:632] Waited for 194.940606ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:43.097482   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:43.097488   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.097494   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.097498   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.099791   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.100329   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.100338   20196 pod_ready.go:82] duration metric: took 379.181132ms for pod "kube-apiserver-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.100345   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.297467   20196 request.go:632] Waited for 197.060564ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:34:43.297531   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-098000-m03
	I1204 15:34:43.297536   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.297542   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.297546   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.299888   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.498066   20196 request.go:632] Waited for 197.627847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:43.498144   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:43.498154   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.498165   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.498171   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.501495   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.501933   20196 pod_ready.go:93] pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.501946   20196 pod_ready.go:82] duration metric: took 401.584296ms for pod "kube-apiserver-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.501955   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.697541   20196 request.go:632] Waited for 195.539974ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:34:43.697609   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000
	I1204 15:34:43.697614   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.697620   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.697624   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.699660   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:43.897896   20196 request.go:632] Waited for 197.715706ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:43.897988   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:43.897999   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:43.898011   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:43.898040   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:43.901116   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:43.901493   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:43.901504   20196 pod_ready.go:82] duration metric: took 399.531331ms for pod "kube-controller-manager-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:43.901511   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.097961   20196 request.go:632] Waited for 196.319346ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:34:44.098022   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m02
	I1204 15:34:44.098031   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.098043   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.098052   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.101549   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.297607   20196 request.go:632] Waited for 195.557496ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:44.297743   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:44.297756   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.297766   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.297776   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.301215   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.301821   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:44.301835   20196 pod_ready.go:82] duration metric: took 400.304316ms for pod "kube-controller-manager-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.301844   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.497418   20196 request.go:632] Waited for 195.52419ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:34:44.497540   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-098000-m03
	I1204 15:34:44.497551   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.497561   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.497567   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.500605   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:44.697803   20196 request.go:632] Waited for 196.768583ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:44.697874   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:44.697880   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.697886   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.697892   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.699791   20196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1204 15:34:44.700181   20196 pod_ready.go:93] pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:44.700191   20196 pod_ready.go:82] duration metric: took 398.331303ms for pod "kube-controller-manager-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.700206   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:44.897582   20196 request.go:632] Waited for 197.274481ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:34:44.897621   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dv6r
	I1204 15:34:44.897628   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:44.897636   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:44.897643   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:44.899968   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.098303   20196 request.go:632] Waited for 197.936546ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:45.098458   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:45.098471   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.098481   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.098489   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.101906   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.102405   20196 pod_ready.go:93] pod "kube-proxy-8dv6r" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:45.102418   20196 pod_ready.go:82] duration metric: took 402.19463ms for pod "kube-proxy-8dv6r" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.102429   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.297787   20196 request.go:632] Waited for 195.298622ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:34:45.297896   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9strn
	I1204 15:34:45.297908   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.297918   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.297924   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.301224   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.497743   20196 request.go:632] Waited for 195.731374ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:45.497789   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:45.497798   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.497808   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.497816   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.501296   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:45.501752   20196 pod_ready.go:93] pod "kube-proxy-9strn" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:45.501764   20196 pod_ready.go:82] duration metric: took 399.314475ms for pod "kube-proxy-9strn" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.501772   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:45.698321   20196 request.go:632] Waited for 196.486057ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:34:45.698364   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mz4q2
	I1204 15:34:45.698368   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.698395   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.698400   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.700678   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.898338   20196 request.go:632] Waited for 197.154497ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:34:45.898437   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m04
	I1204 15:34:45.898445   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:45.898454   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:45.898460   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:45.900811   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:45.901113   20196 pod_ready.go:98] node "ha-098000-m04" hosting pod "kube-proxy-mz4q2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-098000-m04" has status "Ready":"Unknown"
	I1204 15:34:45.901124   20196 pod_ready.go:82] duration metric: took 399.323564ms for pod "kube-proxy-mz4q2" in "kube-system" namespace to be "Ready" ...
	E1204 15:34:45.901130   20196 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-098000-m04" hosting pod "kube-proxy-mz4q2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-098000-m04" has status "Ready":"Unknown"
	I1204 15:34:45.901136   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.098348   20196 request.go:632] Waited for 197.16954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:34:46.098411   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rf4cp
	I1204 15:34:46.098417   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.098423   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.098428   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.100807   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.298026   20196 request.go:632] Waited for 196.74762ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:46.298086   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:46.298092   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.298098   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.298103   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.300358   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.300719   20196 pod_ready.go:93] pod "kube-proxy-rf4cp" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:46.300729   20196 pod_ready.go:82] duration metric: took 399.576022ms for pod "kube-proxy-rf4cp" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.300737   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.497896   20196 request.go:632] Waited for 197.086517ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:34:46.497983   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000
	I1204 15:34:46.498051   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.498063   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.498071   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.501601   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:46.698084   20196 request.go:632] Waited for 195.78543ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:46.698119   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000
	I1204 15:34:46.698125   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.698170   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.698177   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.700251   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:46.700719   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:46.700729   20196 pod_ready.go:82] duration metric: took 399.975386ms for pod "kube-scheduler-ha-098000" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.700736   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:46.898629   20196 request.go:632] Waited for 197.83339ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:34:46.898748   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m02
	I1204 15:34:46.898762   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:46.898773   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:46.898783   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:46.902413   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.099363   20196 request.go:632] Waited for 196.494986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:47.099466   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m02
	I1204 15:34:47.099477   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.099488   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.099495   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.102564   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.102986   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:47.102995   20196 pod_ready.go:82] duration metric: took 402.242621ms for pod "kube-scheduler-ha-098000-m02" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.103002   20196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.297846   20196 request.go:632] Waited for 194.795128ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:34:47.297889   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-098000-m03
	I1204 15:34:47.297939   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.297949   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.297953   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.300484   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:47.498216   20196 request.go:632] Waited for 197.302267ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:47.498266   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-098000-m03
	I1204 15:34:47.498358   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.498374   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.498381   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.501722   20196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1204 15:34:47.502017   20196 pod_ready.go:93] pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace has status "Ready":"True"
	I1204 15:34:47.502028   20196 pod_ready.go:82] duration metric: took 399.008512ms for pod "kube-scheduler-ha-098000-m03" in "kube-system" namespace to be "Ready" ...
	I1204 15:34:47.502037   20196 pod_ready.go:39] duration metric: took 44.322579822s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 15:34:47.502061   20196 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:34:47.502149   20196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:34:47.513881   20196 api_server.go:72] duration metric: took 44.519844285s to wait for apiserver process to appear ...
	I1204 15:34:47.513892   20196 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:34:47.513909   20196 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1204 15:34:47.516967   20196 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1204 15:34:47.517003   20196 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1204 15:34:47.517008   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.517014   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.517018   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.517533   20196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1204 15:34:47.517562   20196 api_server.go:141] control plane version: v1.31.2
	I1204 15:34:47.517569   20196 api_server.go:131] duration metric: took 3.673154ms to wait for apiserver health ...
	I1204 15:34:47.517575   20196 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 15:34:47.697569   20196 request.go:632] Waited for 179.954091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:47.697605   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:47.697611   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.697617   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.697621   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.702548   20196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1204 15:34:47.707779   20196 system_pods.go:59] 26 kube-system pods found
	I1204 15:34:47.707791   20196 system_pods.go:61] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:34:47.707795   20196 system_pods.go:61] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:34:47.707798   20196 system_pods.go:61] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:34:47.707801   20196 system_pods.go:61] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:34:47.707809   20196 system_pods.go:61] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:34:47.707813   20196 system_pods.go:61] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:34:47.707815   20196 system_pods.go:61] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:34:47.707818   20196 system_pods.go:61] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:34:47.707821   20196 system_pods.go:61] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:34:47.707823   20196 system_pods.go:61] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:34:47.707826   20196 system_pods.go:61] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:34:47.707830   20196 system_pods.go:61] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:34:47.707837   20196 system_pods.go:61] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:34:47.707841   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:34:47.707844   20196 system_pods.go:61] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:34:47.707846   20196 system_pods.go:61] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:34:47.707849   20196 system_pods.go:61] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:34:47.707851   20196 system_pods.go:61] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:34:47.707854   20196 system_pods.go:61] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:34:47.707857   20196 system_pods.go:61] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:34:47.707860   20196 system_pods.go:61] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:34:47.707862   20196 system_pods.go:61] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:34:47.707865   20196 system_pods.go:61] "kube-vip-ha-098000" [e04c72cd-f983-42ad-b97f-eeff7a988de3] Running
	I1204 15:34:47.707867   20196 system_pods.go:61] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:34:47.707870   20196 system_pods.go:61] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:34:47.707874   20196 system_pods.go:61] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 15:34:47.707879   20196 system_pods.go:74] duration metric: took 190.294933ms to wait for pod list to return data ...
	I1204 15:34:47.707885   20196 default_sa.go:34] waiting for default service account to be created ...
	I1204 15:34:47.897357   20196 request.go:632] Waited for 189.411036ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:34:47.897446   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1204 15:34:47.897455   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:47.897463   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:47.897470   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:47.899736   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:47.899815   20196 default_sa.go:45] found service account: "default"
	I1204 15:34:47.899824   20196 default_sa.go:55] duration metric: took 191.920936ms for default service account to be created ...
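
The recurring "Waited for ... due to client-side throttling" messages in these requests are emitted by client-go's token-bucket rate limiter, not by API Priority and Fairness on the server. A dependency-light sketch of the same mechanism using golang.org/x/time/rate (the QPS and burst values are illustrative, not minikube's exact configuration):

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// client-go defaults to roughly QPS=5, Burst=10 unless overridden;
	// these numbers are illustrative only.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	ctx := context.Background()

	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil { // blocks once the burst is spent
			panic(err)
		}
		if wait := time.Since(start); wait > time.Millisecond {
			// The analogue of the "Waited for ... due to client-side
			// throttling" log lines above.
			fmt.Printf("request %d waited %v\n", i, wait)
		}
	}
}
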
	I1204 15:34:47.899831   20196 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 15:34:48.097563   20196 request.go:632] Waited for 197.602094ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:48.097612   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1204 15:34:48.097620   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:48.097663   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:48.097675   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:48.102765   20196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1204 15:34:48.109211   20196 system_pods.go:86] 26 kube-system pods found
	I1204 15:34:48.109362   20196 system_pods.go:89] "coredns-7c65d6cfc9-2z7lq" [7e1e544e-4664-4d4f-b739-138f16245205] Running
	I1204 15:34:48.109371   20196 system_pods.go:89] "coredns-7c65d6cfc9-75cm5" [1b5dc783-9820-4da2-8708-6942aad8d7b4] Running
	I1204 15:34:48.109375   20196 system_pods.go:89] "etcd-ha-098000" [5fb3d656-914c-4b5d-88b2-45a263e5c0f5] Running
	I1204 15:34:48.109379   20196 system_pods.go:89] "etcd-ha-098000-m02" [0db72259-8d1a-42d9-8932-9347010f9928] Running
	I1204 15:34:48.109383   20196 system_pods.go:89] "etcd-ha-098000-m03" [9d4fb91f-3910-45c7-99a0-b792e5abdc18] Running
	I1204 15:34:48.109386   20196 system_pods.go:89] "kindnet-bktcq" [5ff5e29d-8bdb-492f-8be8-65295fb7d83f] Running
	I1204 15:34:48.109389   20196 system_pods.go:89] "kindnet-c9zw7" [89986797-2cf2-4a40-8fbf-f765272e3a0b] Running
	I1204 15:34:48.109393   20196 system_pods.go:89] "kindnet-cbqbd" [6bb3b1cc-90bf-4edd-8b90-2d2858a589df] Running
	I1204 15:34:48.109396   20196 system_pods.go:89] "kindnet-w7mbs" [ea012267-3bcf-4aaf-8fdb-eec20c54705f] Running
	I1204 15:34:48.109400   20196 system_pods.go:89] "kube-apiserver-ha-098000" [3682c1da-fa90-4eb2-b638-08e672ac42ca] Running
	I1204 15:34:48.109403   20196 system_pods.go:89] "kube-apiserver-ha-098000-m02" [cf34ac88-6a45-45d4-a5ba-bf292269408d] Running
	I1204 15:34:48.109406   20196 system_pods.go:89] "kube-apiserver-ha-098000-m03" [20252e01-5eb5-4fd0-b69a-970e1e1f21b4] Running
	I1204 15:34:48.109409   20196 system_pods.go:89] "kube-controller-manager-ha-098000" [80d5ef25-9082-4b0a-b6bb-436abe4db170] Running
	I1204 15:34:48.109413   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m02" [2492885a-9c90-4f7c-acfa-abdfa1a701b5] Running
	I1204 15:34:48.109417   20196 system_pods.go:89] "kube-controller-manager-ha-098000-m03" [d5c63137-694d-4b77-ac43-6b6187416145] Running
	I1204 15:34:48.109419   20196 system_pods.go:89] "kube-proxy-8dv6r" [ead0d485-0b06-4e5e-9fae-62dc4a0e3ef4] Running
	I1204 15:34:48.109422   20196 system_pods.go:89] "kube-proxy-9strn" [c31f2e7c-666e-4301-8b05-47dc64eed217] Running
	I1204 15:34:48.109425   20196 system_pods.go:89] "kube-proxy-mz4q2" [a4a3a68c-87d6-4b99-91f4-cdf21d8a22f9] Running
	I1204 15:34:48.109428   20196 system_pods.go:89] "kube-proxy-rf4cp" [757021b4-d317-4b14-a2bb-f94775dabf19] Running
	I1204 15:34:48.109431   20196 system_pods.go:89] "kube-scheduler-ha-098000" [f68bfdba-0475-4102-bfb8-5928f3570d5c] Running
	I1204 15:34:48.109434   20196 system_pods.go:89] "kube-scheduler-ha-098000-m02" [3b5c12d7-664a-4412-8ab3-8b8e227a42d8] Running
	I1204 15:34:48.109437   20196 system_pods.go:89] "kube-scheduler-ha-098000-m03" [69810271-dc1c-41d7-83bc-a508ded618af] Running
	I1204 15:34:48.109439   20196 system_pods.go:89] "kube-vip-ha-098000" [e04c72cd-f983-42ad-b97f-eeff7a988de3] Running
	I1204 15:34:48.109442   20196 system_pods.go:89] "kube-vip-ha-098000-m02" [4cc83d5a-dec9-4a48-8d9a-0791c9b70753] Running
	I1204 15:34:48.109445   20196 system_pods.go:89] "kube-vip-ha-098000-m03" [3aa8346a-09fe-460f-9d1c-bef658af5323] Running
	I1204 15:34:48.109450   20196 system_pods.go:89] "storage-provisioner" [f7564fc1-72eb-47fc-a159-c6463cf27fb3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 15:34:48.109455   20196 system_pods.go:126] duration metric: took 209.614349ms to wait for k8s-apps to be running ...
	I1204 15:34:48.109461   20196 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 15:34:48.109531   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:34:48.120276   20196 system_svc.go:56] duration metric: took 10.810365ms WaitForService to wait for kubelet
	I1204 15:34:48.120291   20196 kubeadm.go:582] duration metric: took 45.126238068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:34:48.120303   20196 node_conditions.go:102] verifying NodePressure condition ...
	I1204 15:34:48.297415   20196 request.go:632] Waited for 177.05913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1204 15:34:48.297455   20196 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1204 15:34:48.297461   20196 round_trippers.go:469] Request Headers:
	I1204 15:34:48.297469   20196 round_trippers.go:473]     Accept: application/json, */*
	I1204 15:34:48.297475   20196 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1204 15:34:48.300123   20196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1204 15:34:48.300830   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300840   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300847   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300850   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300853   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300856   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300860   20196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 15:34:48.300862   20196 node_conditions.go:123] node cpu capacity is 2
	I1204 15:34:48.300866   20196 node_conditions.go:105] duration metric: took 180.554037ms to run NodePressure ...
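
node_conditions.go reads each node's capacity fields, e.g. ephemeral storage "17734596Ki" (about 16.9 GiB) and 2 CPUs per node. A toy parser for the binary-suffix quantities seen here; the real code relies on k8s.io/apimachinery's resource.Quantity instead:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBinaryQuantity handles only the small subset of Kubernetes quantity
// suffixes that appear in the log above.
func parseBinaryQuantity(s string) (int64, error) {
	suffixes := map[string]int64{"Ki": 1 << 10, "Mi": 1 << 20, "Gi": 1 << 30}
	for suf, mul := range suffixes {
		if strings.HasSuffix(s, suf) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, suf), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * mul, nil
		}
	}
	return strconv.ParseInt(s, 10, 64) // plain integer, no suffix
}

func main() {
	b, _ := parseBinaryQuantity("17734596Ki")
	fmt.Printf("ephemeral storage: %d bytes (~%.1f GiB)\n", b, float64(b)/(1<<30))
}
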
	I1204 15:34:48.300874   20196 start.go:241] waiting for startup goroutines ...
	I1204 15:34:48.300889   20196 start.go:255] writing updated cluster config ...
	I1204 15:34:48.322431   20196 out.go:201] 
	I1204 15:34:48.344449   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:34:48.344580   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.367119   20196 out.go:177] * Starting "ha-098000-m04" worker node in "ha-098000" cluster
	I1204 15:34:48.409090   20196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:34:48.409115   20196 cache.go:56] Caching tarball of preloaded images
	I1204 15:34:48.409244   20196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1204 15:34:48.409257   20196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:34:48.409347   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.410058   20196 start.go:360] acquireMachinesLock for ha-098000-m04: {Name:mk5732d0977303b287a6334fd12d5e58dfaa7fa7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:34:48.410126   20196 start.go:364] duration metric: took 51.472µs to acquireMachinesLock for "ha-098000-m04"
	I1204 15:34:48.410144   20196 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:34:48.410150   20196 fix.go:54] fixHost starting: m04
	I1204 15:34:48.410455   20196 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:34:48.410480   20196 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:34:48.421860   20196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58681
	I1204 15:34:48.422147   20196 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:34:48.422522   20196 main.go:141] libmachine: Using API Version  1
	I1204 15:34:48.422541   20196 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:34:48.422736   20196 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:34:48.422817   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:34:48.422956   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetState
	I1204 15:34:48.423067   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.423135   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 19762
	I1204 15:34:48.424293   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid 19762 missing from process table
	I1204 15:34:48.424344   20196 fix.go:112] recreateIfNeeded on ha-098000-m04: state=Stopped err=<nil>
	I1204 15:34:48.424356   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	W1204 15:34:48.424441   20196 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:34:48.445040   20196 out.go:177] * Restarting existing hyperkit VM for "ha-098000-m04" ...
	I1204 15:34:48.535157   20196 main.go:141] libmachine: (ha-098000-m04) Calling .Start
	I1204 15:34:48.535373   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.535405   20196 main.go:141] libmachine: (ha-098000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid
	I1204 15:34:48.535476   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Using UUID 8502617a-13a7-430f-a6ae-7be776245ae1
	I1204 15:34:48.565169   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Generated MAC 7a:59:49:d0:f8:66
	I1204 15:34:48.565217   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000
	I1204 15:34:48.565376   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8502617a-13a7-430f-a6ae-7be776245ae1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:34:48.565411   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8502617a-13a7-430f-a6ae-7be776245ae1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002fec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1204 15:34:48.565471   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8502617a-13a7-430f-a6ae-7be776245ae1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/ha-098000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"}
	I1204 15:34:48.565528   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8502617a-13a7-430f-a6ae-7be776245ae1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/ha-098000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/tty,log=/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/bzimage,/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-098000"
	I1204 15:34:48.565552   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1204 15:34:48.566902   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 DEBUG: hyperkit: Pid is 20252
	I1204 15:34:48.567481   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Attempt 0
	I1204 15:34:48.567496   20196 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:34:48.567619   20196 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 20252
	I1204 15:34:48.570453   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Searching for 7a:59:49:d0:f8:66 in /var/db/dhcpd_leases ...
	I1204 15:34:48.570536   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1204 15:34:48.570551   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:56:f8:e7:bc:e7:07 ID:1,56:f8:e7:bc:e7:7 Lease:0x6750f4f2}
	I1204 15:34:48.570574   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:39:f5:23:0b:32 ID:1,b2:39:f5:23:b:32 Lease:0x6750f4d1}
	I1204 15:34:48.570588   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:46:3b:47:9c:31:41 ID:1,46:3b:47:9c:31:41 Lease:0x6750f4bf}
	I1204 15:34:48.570605   20196 main.go:141] libmachine: (ha-098000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:59:49:d0:f8:66 ID:1,7a:59:49:d0:f8:66 Lease:0x6750e68b}
	I1204 15:34:48.570615   20196 main.go:141] libmachine: (ha-098000-m04) DBG | Found match: 7a:59:49:d0:f8:66
	I1204 15:34:48.570625   20196 main.go:141] libmachine: (ha-098000-m04) DBG | IP: 192.169.0.8
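
The hyperkit driver discovers the VM's IP by scanning macOS's /var/db/dhcpd_leases for the generated MAC, as the DBG lines above show. A rough sketch of that lookup, assuming the lease file's key=value layout with ip_address preceding hw_address inside each entry (not minikube's actual driver code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP returns the ip_address of the lease entry whose hw_address
// contains the given MAC. Note the lease file may strip leading zeros in
// MAC octets (see "ID:1,56:f8:e7:bc:e7:7" above), which a robust
// implementation would normalize; Contains is good enough for a sketch.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			return ip, nil
		}
	}
	return "", fmt.Errorf("MAC %s not found", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "7a:59:49:d0:f8:66")
	fmt.Println(ip, err)
}
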
	I1204 15:34:48.570635   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetConfigRaw
	I1204 15:34:48.571737   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:34:48.571957   20196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/ha-098000/config.json ...
	I1204 15:34:48.572535   20196 machine.go:93] provisionDockerMachine start ...
	I1204 15:34:48.572555   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:34:48.572720   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:48.572824   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:48.572944   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:48.573100   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:48.573236   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:48.573428   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:48.573574   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:48.573582   20196 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:34:48.578618   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1204 15:34:48.587514   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1204 15:34:48.588773   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:34:48.588818   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:34:48.588867   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:34:48.588887   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:34:49.021227   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1204 15:34:49.021251   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1204 15:34:49.136078   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1204 15:34:49.136099   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1204 15:34:49.136106   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1204 15:34:49.136115   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1204 15:34:49.136921   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1204 15:34:49.136930   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1204 15:34:54.890690   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1204 15:34:54.890729   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1204 15:34:54.890737   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1204 15:34:54.916069   20196 main.go:141] libmachine: (ha-098000-m04) DBG | 2024/12/04 15:34:54 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1204 15:34:59.632189   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:34:59.632205   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.632363   20196 buildroot.go:166] provisioning hostname "ha-098000-m04"
	I1204 15:34:59.632375   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.632472   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.632554   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:59.632630   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.632721   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.632816   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:59.633517   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:59.633682   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:59.633692   20196 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-098000-m04 && echo "ha-098000-m04" | sudo tee /etc/hostname
	I1204 15:34:59.697622   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-098000-m04
	
	I1204 15:34:59.697639   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.697775   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:34:59.697886   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.697981   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:34:59.698057   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:34:59.698172   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:59.698298   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:34:59.698309   20196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-098000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-098000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-098000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:34:59.757369   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:34:59.757388   20196 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-17258/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-17258/.minikube}
	I1204 15:34:59.757401   20196 buildroot.go:174] setting up certificates
	I1204 15:34:59.757413   20196 provision.go:84] configureAuth start
	I1204 15:34:59.757421   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetMachineName
	I1204 15:34:59.757593   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:34:59.757706   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:34:59.757790   20196 provision.go:143] copyHostCerts
	I1204 15:34:59.757821   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:34:59.757873   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem, removing ...
	I1204 15:34:59.757878   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem
	I1204 15:34:59.758004   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/cert.pem (1123 bytes)
	I1204 15:34:59.758235   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:34:59.758271   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem, removing ...
	I1204 15:34:59.758277   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem
	I1204 15:34:59.758377   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/key.pem (1679 bytes)
	I1204 15:34:59.758555   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:34:59.758595   20196 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem, removing ...
	I1204 15:34:59.758601   20196 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem
	I1204 15:34:59.758673   20196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-17258/.minikube/ca.pem (1082 bytes)
	I1204 15:34:59.758840   20196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca-key.pem org=jenkins.ha-098000-m04 san=[127.0.0.1 192.169.0.8 ha-098000-m04 localhost minikube]
	I1204 15:35:00.089781   20196 provision.go:177] copyRemoteCerts
	I1204 15:35:00.090065   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:35:00.090090   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.090250   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.090364   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.090440   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.090527   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:00.124202   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1204 15:35:00.124273   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 15:35:00.161213   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1204 15:35:00.161289   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 15:35:00.180684   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1204 15:35:00.180757   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 15:35:00.200255   20196 provision.go:87] duration metric: took 442.820652ms to configureAuth
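
The configureAuth step above generates a Docker server certificate whose SANs cover the addresses from the log (127.0.0.1, 192.169.0.8, ha-098000-m04, localhost, minikube). A condensed crypto/x509 sketch; unlike minikube, which signs with the CA at certs/ca.pem, this one self-signs for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-098000-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the "san=[...]" log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
		DNSNames:    []string{"ha-098000-m04", "localhost", "minikube"},
	}
	// Self-signed: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
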
	I1204 15:35:00.200272   20196 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:35:00.201095   20196 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:35:00.201110   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:00.201255   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.201346   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.201433   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.201525   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.201613   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.201739   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.201862   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.201869   20196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:35:00.254941   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:35:00.254954   20196 buildroot.go:70] root file system type: tmpfs
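
The provisioner detects the guest's root filesystem by running `df --output=fstype /` over SSH and reading the single output line ("tmpfs" on this Buildroot image). Inside the guest, the same answer could come from statfs(2); a Linux-only sketch, using TMPFS_MAGIC as defined in linux/magic.h (an alternative approach, not what minikube does):

//go:build linux

package main

import (
	"fmt"
	"syscall"
)

// tmpfsMagic is TMPFS_MAGIC from linux/magic.h.
const tmpfsMagic = 0x01021994

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		panic(err)
	}
	fmt.Println("root is tmpfs:", st.Type == tmpfsMagic)
}
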
	I1204 15:35:00.255043   20196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:35:00.255055   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.255192   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.255284   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.255363   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.255444   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.255591   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.255723   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.255769   20196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:35:00.320168   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:35:00.320186   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:00.320331   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:00.320425   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.320520   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:00.320607   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:00.320759   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:00.320905   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:00.320920   20196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:35:01.894648   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:35:01.894665   20196 machine.go:96] duration metric: took 13.321743335s to provisionDockerMachine
	I1204 15:35:01.894674   20196 start.go:293] postStartSetup for "ha-098000-m04" (driver="hyperkit")
	I1204 15:35:01.894686   20196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:35:01.894699   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:01.894901   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:35:01.894920   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:01.895018   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:01.895119   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:01.895219   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:01.895309   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:01.930531   20196 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:35:01.933734   20196 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 15:35:01.933745   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/addons for local assets ...
	I1204 15:35:01.933830   20196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-17258/.minikube/files for local assets ...
	I1204 15:35:01.934221   20196 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> 178212.pem in /etc/ssl/certs
	I1204 15:35:01.934229   20196 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem -> /etc/ssl/certs/178212.pem
	I1204 15:35:01.934400   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:35:01.942635   20196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/ssl/certs/178212.pem --> /etc/ssl/certs/178212.pem (1708 bytes)
	I1204 15:35:01.962080   20196 start.go:296] duration metric: took 67.394691ms for postStartSetup
	I1204 15:35:01.962104   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:01.962295   20196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1204 15:35:01.962307   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:01.962392   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:01.962474   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:01.962566   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:01.962648   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:01.996347   20196 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1204 15:35:01.996427   20196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1204 15:35:02.030032   20196 fix.go:56] duration metric: took 13.619496662s for fixHost
	I1204 15:35:02.030058   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.030197   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.030296   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.030393   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.030479   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.030637   20196 main.go:141] libmachine: Using SSH client type: native
	I1204 15:35:02.030806   20196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc67c100] 0xc67ede0 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1204 15:35:02.030817   20196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:35:02.085147   20196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355302.120673328
	
	I1204 15:35:02.085159   20196 fix.go:216] guest clock: 1733355302.120673328
	I1204 15:35:02.085164   20196 fix.go:229] Guest: 2024-12-04 15:35:02.120673328 -0800 PST Remote: 2024-12-04 15:35:02.030047 -0800 PST m=+128.947170547 (delta=90.626328ms)
	I1204 15:35:02.085182   20196 fix.go:200] guest clock delta is within tolerance: 90.626328ms
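
The guest-clock check runs `date +%s.%N` in the VM and compares the result with the host's wall clock; here the delta is 90.626328ms, inside tolerance. A sketch of the comparison using the timestamps from the log (the 2s tolerance is an assumption for illustration, not minikube's exact threshold):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is ahead of the host clock. float64 parsing loses
// sub-microsecond precision, which is fine at millisecond tolerances.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(1733355302, 30047000) // the "Remote" timestamp above
	d, _ := clockDelta("1733355302.120673328", host)
	fmt.Printf("delta=%v within tolerance: %v\n", d, d < 2*time.Second && d > -2*time.Second)
}
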
	I1204 15:35:02.085188   20196 start.go:83] releasing machines lock for "ha-098000-m04", held for 13.674670433s
	I1204 15:35:02.085206   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.085349   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:35:02.123833   20196 out.go:177] * Found network options:
	I1204 15:35:02.144638   20196 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W1204 15:35:02.165506   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.165534   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.165554   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:35:02.165573   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.166172   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:35:02.166326   20196 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	W1204 15:35:02.166492   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.166507   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	W1204 15:35:02.166517   20196 proxy.go:119] fail to check proxy env: Error ip not in block
	I1204 15:35:02.166609   20196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 15:35:02.166623   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.166758   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.166911   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.167036   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.167085   20196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:35:02.167112   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:35:02.167158   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:35:02.167255   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:35:02.167389   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:35:02.167508   20196 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:35:02.167638   20196 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	W1204 15:35:02.202034   20196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:35:02.202111   20196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 15:35:02.250167   20196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:35:02.250181   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:35:02.250263   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:35:02.264522   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 15:35:02.273699   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:35:02.283110   20196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:35:02.283199   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:35:02.292318   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:35:02.301397   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:35:02.310459   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:35:02.319592   20196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:35:02.328805   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:35:02.338084   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:35:02.347336   20196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:35:02.356538   20196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:35:02.364640   20196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1204 15:35:02.364708   20196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1204 15:35:02.374467   20196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 15:35:02.382987   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:35:02.482753   20196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:35:02.500374   20196 start.go:495] detecting cgroup driver to use...
	I1204 15:35:02.500464   20196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:35:02.521212   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:35:02.537841   20196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:35:02.556887   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:35:02.568330   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:35:02.579634   20196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:35:02.599962   20196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:35:02.611341   20196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:35:02.627983   20196 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:35:02.630940   20196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:35:02.638934   20196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 15:35:02.652587   20196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:35:02.752578   20196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:35:02.855546   20196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:35:02.855575   20196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:35:02.869623   20196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:35:02.966924   20196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:36:03.915497   20196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.946841873s)
	I1204 15:36:03.916405   20196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1204 15:36:03.950956   20196 out.go:201] 
	W1204 15:36:03.971878   20196 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 04 23:35:00 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640232708Z" level=info msg="Starting up"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.640913001Z" level=info msg="containerd not running, starting managed containerd"
	Dec 04 23:35:00 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:00.641520029Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=498
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.659694182Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677007859Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677106781Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677181167Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677217787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677508761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677564998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677718553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677761182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677794548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677829672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.677979478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.678361377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.679991465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680045979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680192561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680239332Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680562445Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.680612744Z" level=info msg="metadata content store policy set" policy=shared
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684019168Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684126285Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684179264Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684280902Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684315598Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684384845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684662040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684780718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684823731Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684856490Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684888664Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684919549Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684954923Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.684987161Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685018887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685064260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685101516Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685133834Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685178048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685213190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685243893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685277956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685310825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685342262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685371807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685438293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685477655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685510785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685541139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685570835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685612124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685654983Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685694239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685725951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685757256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685828769Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.685873022Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686013280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686053930Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686084541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686114731Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686150092Z" level=info msg="NRI interface is disabled by configuration."
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686396292Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686486749Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686550930Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 04 23:35:00 ha-098000-m04 dockerd[498]: time="2024-12-04T23:35:00.686589142Z" level=info msg="containerd successfully booted in 0.028291s"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.663269012Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.685002759Z" level=info msg="Loading containers: start."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.779781751Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.847897599Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.892577077Z" level=info msg="Loading containers: done."
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902420090Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902480737Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902498001Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.902856617Z" level=info msg="Daemon has completed initialization"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925683807Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 04 23:35:01 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:01.925904543Z" level=info msg="API listen on [::]:2376"
	Dec 04 23:35:01 ha-098000-m04 systemd[1]: Started Docker Application Container Engine.
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.029030705Z" level=info msg="Processing signal 'terminated'"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.030916905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031062918Z" level=info msg="Daemon shutdown complete"
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031129826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 04 23:35:03 ha-098000-m04 dockerd[491]: time="2024-12-04T23:35:03.031209544Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 04 23:35:03 ha-098000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: docker.service: Deactivated successfully.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Dec 04 23:35:04 ha-098000-m04 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:35:04 ha-098000-m04 dockerd[1154]: time="2024-12-04T23:35:04.084800926Z" level=info msg="Starting up"
	Dec 04 23:36:04 ha-098000-m04 dockerd[1154]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 04 23:36:04 ha-098000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1204 15:36:03.971971   20196 out.go:270] * 
	W1204 15:36:03.973111   20196 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:36:04.052589   20196 out.go:201] 
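The journal above pins down the failure mode: the first dockerd (pid 491) finds no containerd, starts its own managed containerd, and comes up cleanly, but after the config rewrite the second dockerd (pid 1154) instead dials the system socket /run/containerd/containerd.sock and times out after 60s. One plausible reading is that this dial can never succeed, since containerd was stopped with `systemctl stop -f containerd` earlier in the same sequence. Hypothetical follow-up commands on the guest (not from this run) to separate a containerd fault from a socket-path mismatch:

    # Is system containerd running, and does its socket exist?
    sudo systemctl status containerd --no-pager
    sudo journalctl -u containerd --no-pager | tail -n 50
    ls -l /run/containerd/containerd.sock
    # Which containerd socket is docker.service configured to use?
    sudo systemctl cat docker.service | grep -n containerd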
	
	
	==> Docker <==
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485304530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485378415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485392381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:08 ha-098000 dockerd[1158]: time="2024-12-04T23:34:08.485468152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.488796991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.488864892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.489011132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.489159283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.505144687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.505534782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.505591473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:09 ha-098000 dockerd[1158]: time="2024-12-04T23:34:09.506131239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.584088576Z" level=info msg="shim disconnected" id=59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30 namespace=moby
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.584482016Z" level=warning msg="cleaning up after shim disconnected" id=59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30 namespace=moby
	Dec 04 23:34:37 ha-098000 dockerd[1151]: time="2024-12-04T23:34:37.584644745Z" level=info msg="ignoring event" container=59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.584822280Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 04 23:34:37 ha-098000 dockerd[1158]: time="2024-12-04T23:34:37.596833687Z" level=warning msg="cleanup warnings time=\"2024-12-04T23:34:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.455018691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.456263444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.456323640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:34:39 ha-098000 dockerd[1158]: time="2024-12-04T23:34:39.456579989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:36:08 ha-098000 dockerd[1158]: time="2024-12-04T23:36:08.463132287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 23:36:08 ha-098000 dockerd[1158]: time="2024-12-04T23:36:08.463712435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 23:36:08 ha-098000 dockerd[1158]: time="2024-12-04T23:36:08.463780797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 23:36:08 ha-098000 dockerd[1158]: time="2024-12-04T23:36:08.464133475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5e57abd3c0726       6e38f40d628db                                                                                         13 seconds ago       Running             storage-provisioner       2                   85942c1ee0c48       storage-provisioner
	274afa9228625       c69fa2e9cbf5f                                                                                         About a minute ago   Running             coredns                   1                   566b4c12aa8e2       coredns-7c65d6cfc9-2z7lq
	9260f06aa6160       9ca7e41918271                                                                                         2 minutes ago        Running             kindnet-cni               1                   1784deace7582       kindnet-c9zw7
	3aa9f0074ad24       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   ee7fb852b0746       busybox-7dff88458-tkk5l
	a4f10e7a31b1e       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   9544aac6431ee       coredns-7c65d6cfc9-75cm5
	4d500c5582d7e       505d571f5fd56                                                                                         2 minutes ago        Running             kube-proxy                1                   e007c09acabae       kube-proxy-9strn
	59729ff8ece5d       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   85942c1ee0c48       storage-provisioner
	06090b0373c28       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      1                   492043398c8f7       etcd-ha-098000
	832c9a15fccb2       847c7bc1a5418                                                                                         3 minutes ago        Running             kube-scheduler            1                   85cb9204adcbc       kube-scheduler-ha-098000
	28b6bc3009d9a       4b34defda8067                                                                                         3 minutes ago        Running             kube-vip                  0                   092f7a958b993       kube-vip-ha-098000
	3fbffe6ec740e       0486b6c53a1b5                                                                                         3 minutes ago        Running             kube-controller-manager   1                   d3d303d826e70       kube-controller-manager-ha-098000
	d11a51451327e       9499c9960544e                                                                                         3 minutes ago        Running             kube-apiserver            1                   2e4b3bead8edd       kube-apiserver-ha-098000
	91698004f45ac       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   7e62e6836673c       busybox-7dff88458-tkk5l
	334347c0146ff       c69fa2e9cbf5f                                                                                         8 minutes ago        Exited              coredns                   0                   106dba456980c       coredns-7c65d6cfc9-75cm5
	d45b7ca2c321b       c69fa2e9cbf5f                                                                                         8 minutes ago        Exited              coredns                   0                   0af8351fa9e0d       coredns-7c65d6cfc9-2z7lq
	fdb9e4f5e8f3d       kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16              8 minutes ago        Exited              kindnet-cni               0                   9933ca421eee5       kindnet-c9zw7
	12aba82bb9eef       505d571f5fd56                                                                                         8 minutes ago        Exited              kube-proxy                0                   1d340d81fbfb5       kube-proxy-9strn
	542f42367b5c6       0486b6c53a1b5                                                                                         8 minutes ago        Exited              kube-controller-manager   0                   05f42a6061648       kube-controller-manager-ha-098000
	1a5a6b8eb38ec       847c7bc1a5418                                                                                         8 minutes ago        Exited              kube-scheduler            0                   08cbd5b0cfe57       kube-scheduler-ha-098000
	347bf5bfb2fe6       2e96e5913fc06                                                                                         8 minutes ago        Exited              etcd                      0                   b7d6e2da744bd       etcd-ha-098000
	671e22f525950       9499c9960544e                                                                                         8 minutes ago        Exited              kube-apiserver            0                   81c0cf31c7e46       kube-apiserver-ha-098000
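The table above is CRI-level container state from the surviving primary node ha-098000; on a minikube guest the same view can typically be reproduced with crictl (shown as a pointer, not taken from this log):

    # List all CRI containers, including exited attempts
    sudo crictl ps -a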
	
	
	==> coredns [274afa922862] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44993 - 12483 "HINFO IN 5217430967915220008.4602414331418196309. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026715635s
	
	
	==> coredns [334347c0146f] <==
	[INFO] 10.244.0.4:55981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000158655s
	[INFO] 10.244.0.4:42290 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098463s
	[INFO] 10.244.0.4:58242 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000058466s
	[INFO] 10.244.0.4:37059 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090224s
	[INFO] 10.244.3.2:34052 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150969s
	[INFO] 10.244.3.2:48314 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000048987s
	[INFO] 10.244.3.2:47597 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00004272s
	[INFO] 10.244.3.2:43130 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042338s
	[INFO] 10.244.3.2:40288 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040487s
	[INFO] 10.244.1.2:41974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126535s
	[INFO] 10.244.0.4:46586 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136951s
	[INFO] 10.244.3.2:49834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132271s
	[INFO] 10.244.3.2:35105 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007496s
	[INFO] 10.244.3.2:46872 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103202s
	[INFO] 10.244.3.2:51001 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043782s
	[INFO] 10.244.1.2:60852 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151622s
	[INFO] 10.244.1.2:45169 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00010811s
	[INFO] 10.244.0.4:50794 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179037s
	[INFO] 10.244.0.4:33885 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091089s
	[INFO] 10.244.0.4:59078 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080787s
	[INFO] 10.244.0.4:47967 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000331118s
	[INFO] 10.244.3.2:37401 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005625s
	[INFO] 10.244.3.2:58299 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.00008056s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a4f10e7a31b1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56042 - 51130 "HINFO IN 4860731135473207728.3302970177185641581. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.195382352s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[898011711]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Dec-2024 23:34:08.764) (total time: 30003ms):
	Trace[898011711]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (23:34:38.768)
	Trace[898011711]: [30.003839217s] [30.003839217s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[451941860]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Dec-2024 23:34:08.766) (total time: 30002ms):
	Trace[451941860]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:34:38.768)
	Trace[451941860]: [30.00227073s] [30.00227073s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[957834387]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Dec-2024 23:34:08.764) (total time: 30004ms):
	Trace[957834387]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (23:34:38.768)
	Trace[957834387]: [30.004945433s] [30.004945433s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
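All three reflector failures above are dials to 10.96.0.1:443, the in-cluster Service VIP for the API server, so this CoreDNS replica is running but cannot reach the control plane through the Service network during that 30s window. A hedged cross-check from outside the pod (hypothetical, not part of this run):

    # The kubernetes Service should map the VIP to real apiserver endpoints
    kubectl get svc kubernetes -o wide
    kubectl get endpoints kubernetes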
	
	
	==> coredns [d45b7ca2c321] <==
	[INFO] 10.244.1.2:58995 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.045573822s
	[INFO] 10.244.0.4:47628 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000074289s
	[INFO] 10.244.0.4:33651 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000655957s
	[INFO] 10.244.0.4:59923 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.000433816s
	[INFO] 10.244.1.2:47489 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094853s
	[INFO] 10.244.1.2:60918 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109867s
	[INFO] 10.244.0.4:58795 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102995s
	[INFO] 10.244.0.4:56882 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000100778s
	[INFO] 10.244.0.4:41069 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155649s
	[INFO] 10.244.0.4:47261 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005694s
	[INFO] 10.244.3.2:57069 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00065513s
	[INFO] 10.244.3.2:45549 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000047282s
	[INFO] 10.244.3.2:44245 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103531s
	[INFO] 10.244.1.2:39311 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122238s
	[INFO] 10.244.1.2:35593 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174598s
	[INFO] 10.244.1.2:45158 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007291s
	[INFO] 10.244.0.4:35211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106877s
	[INFO] 10.244.0.4:54591 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089769s
	[INFO] 10.244.0.4:59162 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036611s
	[INFO] 10.244.1.2:49523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134823s
	[INFO] 10.244.1.2:54333 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139019s
	[INFO] 10.244.3.2:46351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095077s
	[INFO] 10.244.3.2:33059 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000046925s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-098000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T15_27_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:27:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:36:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:27:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:27:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:27:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:33:49 +0000   Wed, 04 Dec 2024 23:28:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-098000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d2318e94e39401090f7022df3a380b0
	  System UUID:                70104c46-0000-0000-9279-8221d5ed18af
	  Boot ID:                    637a375b-a691-4a3e-8b6f-369766d12741
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tkk5l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 coredns-7c65d6cfc9-2z7lq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m32s
	  kube-system                 coredns-7c65d6cfc9-75cm5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m32s
	  kube-system                 etcd-ha-098000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m39s
	  kube-system                 kindnet-c9zw7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m33s
	  kube-system                 kube-apiserver-ha-098000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-ha-098000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-9strn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-scheduler-ha-098000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-vip-ha-098000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m31s                  kube-proxy       
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  Starting                 8m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    8m43s (x8 over 8m44s)  kubelet          Node ha-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m43s (x8 over 8m44s)  kubelet          Node ha-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m43s (x7 over 8m44s)  kubelet          Node ha-098000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m37s                  kubelet          Node ha-098000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m37s                  kubelet          Node ha-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m37s                  kubelet          Node ha-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m37s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m34s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  NodeReady                8m13s                  kubelet          Node ha-098000 status is now: NodeReady
	  Normal  RegisteredNode           7m29s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  Starting                 3m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m11s (x8 over 3m11s)  kubelet          Node ha-098000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m11s (x8 over 3m11s)  kubelet          Node ha-098000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m11s (x7 over 3m11s)  kubelet          Node ha-098000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m41s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           2m40s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	  Normal  RegisteredNode           2m12s                  node-controller  Node ha-098000 event: Registered Node ha-098000 in Controller
	
	
	Name:               ha-098000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T15_28_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:28:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:36:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:28:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:28:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:28:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:33:40 +0000   Wed, 04 Dec 2024 23:29:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-098000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 050a31912ec64c378c8000c9ffa16f74
	  System UUID:                2486449a-0000-0000-8055-5ee234f7d16f
	  Boot ID:                    90b90eed-fa44-41ea-9bc0-c9160a359639
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fvhj6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 etcd-ha-098000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m35s
	  kube-system                 kindnet-w7mbs                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m38s
	  kube-system                 kube-apiserver-ha-098000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-controller-manager-ha-098000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-proxy-8dv6r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-scheduler-ha-098000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-vip-ha-098000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m33s                  kube-proxy       
	  Normal   Starting                 2m36s                  kube-proxy       
	  Normal   Starting                 4m12s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m38s (x8 over 7m38s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  7m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     7m38s (x7 over 7m38s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m38s (x8 over 7m38s)  kubelet          Node ha-098000-m02 status is now: NodeHasNoDiskPressure
	  Normal   CIDRAssignmentFailed     7m37s                  cidrAllocator    Node ha-098000-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           7m34s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           7m29s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           6m13s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 4m18s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 4m17s                  kubelet          Node ha-098000-m02 has been rebooted, boot id: 68d7d994-2a07-4139-8dc9-8d63e0527a5a
	  Normal   NodeHasSufficientMemory  4m17s                  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m17s                  kubelet          Node ha-098000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m17s                  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m52s (x8 over 2m52s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m52s (x8 over 2m52s)  kubelet          Node ha-098000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m52s (x7 over 2m52s)  kubelet          Node ha-098000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m41s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	  Normal   RegisteredNode           2m12s                  node-controller  Node ha-098000-m02 event: Registered Node ha-098000-m02 in Controller
	
	
	Name:               ha-098000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-098000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=ha-098000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_04T15_30_55_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:30:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-098000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:32:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 04 Dec 2024 23:31:25 +0000   Wed, 04 Dec 2024 23:34:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-098000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a62de52f960740ecbed3bac1b9967c23
	  System UUID:                8502430f-0000-0000-a6ae-7be776245ae1
	  Boot ID:                    2c58ff3e-7f5d-436d-bc58-b646d91cdd24
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bktcq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-proxy-mz4q2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m28s (x2 over 5m28s)  kubelet          Node ha-098000-m04 status is now: NodeHasSufficientMemory
	  Normal  CIDRAssignmentFailed     5m28s                  cidrAllocator    Node ha-098000-m04 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed     5m28s                  cidrAllocator    Node ha-098000-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m28s (x2 over 5m28s)  kubelet          Node ha-098000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m28s (x2 over 5m28s)  kubelet          Node ha-098000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  NodeReady                5m5s                   kubelet          Node ha-098000-m04 status is now: NodeReady
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           2m41s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           2m40s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  RegisteredNode           2m12s                  node-controller  Node ha-098000-m04 event: Registered Node ha-098000-m04 in Controller
	  Normal  NodeNotReady             2m1s                   node-controller  Node ha-098000-m04 status is now: NodeNotReady
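	
	Note: the Unknown conditions and unreachable taints on ha-098000-m04 above follow from the kubelet's last lease renewal at 23:32:16; once it stopped posting status, the node-controller marked the node NodeNotReady and applied the node.kubernetes.io/unreachable taints shown. A minimal client-go sketch for inspecting that state (illustrative only, not part of the test suite; assumes the kubeconfig at the default home path is reachable):
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: kubeconfig at the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Ready=Unknown with reason NodeStatusUnknown matches the m04 output above.
					fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
				}
			}
			for _, t := range n.Spec.Taints {
				fmt.Printf("  taint %s:%s\n", t.Key, t.Effect)
			}
		}
	}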
	
	
	==> dmesg <==
	[  +0.035548] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008017] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.831418] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000001] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006643] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec 4 23:33] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.189224] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.252070] systemd-fstab-generator[461]: Ignoring "noauto" option for root device
	[  +0.109203] systemd-fstab-generator[473]: Ignoring "noauto" option for root device
	[  +1.970887] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +0.250804] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +0.104275] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +0.059539] kauditd_printk_skb: 135 callbacks suppressed
	[  +0.050691] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +2.385557] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +0.100797] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.107482] systemd-fstab-generator[1382]: Ignoring "noauto" option for root device
	[  +0.131343] systemd-fstab-generator[1397]: Ignoring "noauto" option for root device
	[  +0.416585] systemd-fstab-generator[1555]: Ignoring "noauto" option for root device
	[  +6.808599] kauditd_printk_skb: 178 callbacks suppressed
	[ +34.877094] kauditd_printk_skb: 40 callbacks suppressed
	[Dec 4 23:34] kauditd_printk_skb: 20 callbacks suppressed
	[ +30.099102] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [06090b0373c2] <==
	{"level":"info","ts":"2024-12-04T23:34:04.808860Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"ba5f5cb2731bb4ee","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-12-04T23:34:04.809180Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.809788Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.817560Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.817933Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:34:04.822840Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"ba5f5cb2731bb4ee","stream-type":"stream Message"}
	{"level":"info","ts":"2024-12-04T23:34:04.823089Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.716587Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.169.0.7:58662","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-12-04T23:36:12.725212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(5521112234287866227 13314548521573537860)"}
	{"level":"info","ts":"2024-12-04T23:36:12.726251Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"ba5f5cb2731bb4ee","removed-remote-peer-urls":["https://192.169.0.7:2380"]}
	{"level":"info","ts":"2024-12-04T23:36:12.726295Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.726504Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.726575Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.726698Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.726773Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.726934Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.727113Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","error":"context canceled"}
	{"level":"warn","ts":"2024-12-04T23:36:12.727212Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ba5f5cb2731bb4ee","error":"failed to read ba5f5cb2731bb4ee on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-12-04T23:36:12.727230Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.727376Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-12-04T23:36:12.727445Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.727457Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:36:12.727465Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.733801Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b8c6c7563d17d844","remote-peer-id-stream-handler":"b8c6c7563d17d844","remote-peer-id-from":"ba5f5cb2731bb4ee"}
	{"level":"warn","ts":"2024-12-04T23:36:12.738397Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.169.0.7:59604","server-name":"","error":"read tcp 192.169.0.5:2380->192.169.0.7:59604: read: connection reset by peer"}
	
	
	==> etcd [347bf5bfb2fe] <==
	{"level":"info","ts":"2024-12-04T23:32:45.441636Z","caller":"traceutil/trace.go:171","msg":"trace[2067592358] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"7.930249208s","start":"2024-12-04T23:32:37.511384Z","end":"2024-12-04T23:32:45.441633Z","steps":["trace[2067592358] 'agreement among raft nodes before linearized reading'  (duration: 7.930237481s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:32:45.441645Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T23:32:37.511358Z","time spent":"7.930284886s","remote":"127.0.0.1:53382","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/12/04 23:32:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-04T23:32:45.469417Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4c9eee5331caa173","rtt":"895.585µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:32:45.469450Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4c9eee5331caa173","rtt":"6.632061ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-04T23:32:45.474963Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T23:32:45.474990Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-04T23:32:45.475061Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-12-04T23:32:45.477612Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477653Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477794Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477828Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477881Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477891Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4c9eee5331caa173"}
	{"level":"info","ts":"2024-12-04T23:32:45.477896Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.477902Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478035Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478719Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478746Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478876Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.478921Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ba5f5cb2731bb4ee"}
	{"level":"info","ts":"2024-12-04T23:32:45.484500Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-12-04T23:32:45.484609Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-12-04T23:32:45.484618Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-098000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 23:36:22 up 3 min,  0 users,  load average: 0.05, 0.12, 0.06
	Linux ha-098000 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9260f06aa616] <==
	I1204 23:35:50.729785       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:35:50.730373       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:35:50.730499       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:36:00.728959       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:36:00.729028       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:36:00.729792       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:36:00.729847       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:36:00.730298       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:36:00.730349       1 main.go:301] handling current node
	I1204 23:36:00.730594       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:36:00.730746       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:36:10.720296       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:36:10.720596       1 main.go:301] handling current node
	I1204 23:36:10.720807       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:36:10.720933       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:36:10.721361       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:36:10.721422       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:36:10.721602       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:36:10.721699       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:36:20.721048       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:36:20.721238       1 main.go:301] handling current node
	I1204 23:36:20.721309       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:36:20.721330       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:36:20.721559       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:36:20.721644       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [fdb9e4f5e8f3] <==
	I1204 23:32:15.006102       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:25.007657       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:32:25.007678       1 main.go:301] handling current node
	I1204 23:32:25.007687       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:32:25.007690       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:32:25.007809       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:32:25.007816       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:25.007864       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:32:25.007868       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:32:35.003703       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:32:35.003856       1 main.go:301] handling current node
	I1204 23:32:35.003925       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:32:35.004015       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:32:35.004440       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:32:35.004559       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:35.004793       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:32:35.004877       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	I1204 23:32:45.006980       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1204 23:32:45.007018       1 main.go:301] handling current node
	I1204 23:32:45.007028       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1204 23:32:45.007068       1 main.go:324] Node ha-098000-m02 has CIDR [10.244.1.0/24] 
	I1204 23:32:45.007194       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1204 23:32:45.007199       1 main.go:324] Node ha-098000-m03 has CIDR [10.244.3.0/24] 
	I1204 23:32:45.010702       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1204 23:32:45.010735       1 main.go:324] Node ha-098000-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [671e22f52595] <==
	W1204 23:32:45.463180       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 23:32:45.463233       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 23:32:45.463290       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1204 23:32:45.463996       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E1204 23:32:45.465107       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465150       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465162       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465210       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465693       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.465858       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.470160       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.470650       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.470675       1 watcher.go:342] watch chan error: etcdserver: no leader
	W1204 23:32:45.470803       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E1204 23:32:45.471772       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471810       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471812       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471821       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471830       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471831       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471841       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471842       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471779       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471852       1 watcher.go:342] watch chan error: etcdserver: no leader
	E1204 23:32:45.471789       1 watcher.go:342] watch chan error: etcdserver: no leader
	
	
	==> kube-apiserver [d11a51451327] <==
	I1204 23:33:38.594218       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1204 23:33:38.687977       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1204 23:33:38.688296       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1204 23:33:38.691058       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1204 23:33:38.691545       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1204 23:33:38.691575       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1204 23:33:38.691653       1 shared_informer.go:320] Caches are synced for configmaps
	I1204 23:33:38.692048       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1204 23:33:38.694556       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1204 23:33:38.694668       1 aggregator.go:171] initial CRD sync complete...
	I1204 23:33:38.694729       1 autoregister_controller.go:144] Starting autoregister controller
	I1204 23:33:38.694758       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1204 23:33:38.694764       1 cache.go:39] Caches are synced for autoregister controller
	I1204 23:33:38.696202       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1204 23:33:38.697593       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W1204 23:33:38.705769       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.7]
	I1204 23:33:38.717725       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1204 23:33:38.717774       1 policy_source.go:224] refreshing policies
	I1204 23:33:38.734833       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 23:33:38.808657       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 23:33:38.819037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1204 23:33:38.825290       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1204 23:33:39.595794       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1204 23:33:39.838860       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.7]
	W1204 23:33:59.841208       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5 192.169.0.6]
	
	
	==> kube-controller-manager [3fbffe6ec740] <==
	I1204 23:34:40.034377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="49.026µs"
	I1204 23:34:40.057548       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gws4x\": the object has been modified; please apply your changes to the latest version and try again"
	I1204 23:34:40.057604       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c03d8180-947f-4c13-8442-c9080cad76d5", APIVersion:"v1", ResourceVersion:"295", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gws4x": the object has been modified; please apply your changes to the latest version and try again
	I1204 23:34:40.074632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.303024ms"
	I1204 23:34:40.074955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="276.731µs"
	I1204 23:34:42.740022       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gws4x\": the object has been modified; please apply your changes to the latest version and try again"
	I1204 23:34:42.740245       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c03d8180-947f-4c13-8442-c9080cad76d5", APIVersion:"v1", ResourceVersion:"295", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gws4x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gws4x": the object has been modified; please apply your changes to the latest version and try again
	I1204 23:34:42.779484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.083905ms"
	I1204 23:34:42.779600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.609µs"
	I1204 23:36:09.380863       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m03"
	I1204 23:36:09.398727       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m03"
	I1204 23:36:09.473781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.224147ms"
	I1204 23:36:09.514589       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.272704ms"
	I1204 23:36:09.523247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.42842ms"
	I1204 23:36:09.523526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="239.36µs"
	I1204 23:36:11.564313       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.243µs"
	I1204 23:36:11.792368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.371µs"
	I1204 23:36:11.800141       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="209.77µs"
	I1204 23:36:13.480525       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m03"
	E1204 23:36:13.504069       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-098000-m03\", UID:\"1a2cb970-81cd-49bb-ae75-f6ed496ff60c\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-098000-m03\", UID:\"189ed158-f416-4d5e-91e6-e148874a3aad\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-098000-m03\" not found" logger="UnhandledError"
	E1204 23:36:21.473858       1 gc_controller.go:151] "Failed to get node" err="node \"ha-098000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-098000-m03"
	E1204 23:36:21.473938       1 gc_controller.go:151] "Failed to get node" err="node \"ha-098000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-098000-m03"
	E1204 23:36:21.473944       1 gc_controller.go:151] "Failed to get node" err="node \"ha-098000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-098000-m03"
	E1204 23:36:21.473948       1 gc_controller.go:151] "Failed to get node" err="node \"ha-098000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-098000-m03"
	E1204 23:36:21.473951       1 gc_controller.go:151] "Failed to get node" err="node \"ha-098000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-098000-m03"
	
	
	==> kube-controller-manager [542f42367b5c] <==
	I1204 23:30:54.887539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	E1204 23:30:54.955494       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-098000-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-098000-m04" podCIDRs=["10.244.5.0/24"]
	E1204 23:30:54.955551       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-098000-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-098000-m04"
	E1204 23:30:54.955659       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-098000-m04': failed to patch node CIDR: Node \"ha-098000-m04\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.5.0/24\", \"10.244.4.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1204 23:30:54.955704       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:54.963682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:55.113954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:55.398651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:58.480353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.039327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.039986       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-098000-m04"
	I1204 23:30:59.109649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.147948       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:30:59.198931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:04.937478       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:17.609283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:17.610373       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-098000-m04"
	I1204 23:31:17.617825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:18.441772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:31:25.412764       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m04"
	I1204 23:32:05.990296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-098000-m02"
	I1204 23:32:06.869398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.356069ms"
	I1204 23:32:06.870323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.454µs"
	I1204 23:32:09.287240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.687088ms"
	I1204 23:32:09.288363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="1.04846ms"
	
	
	==> kube-proxy [12aba82bb9ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:27:51.161946       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:27:51.171777       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1204 23:27:51.171971       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:27:51.199877       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:27:51.199962       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:27:51.199995       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:27:51.202350       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:27:51.202766       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:27:51.202823       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:27:51.204709       1 config.go:199] "Starting service config controller"
	I1204 23:27:51.205031       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:27:51.205184       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:27:51.205227       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:27:51.206547       1 config.go:328] "Starting node config controller"
	I1204 23:27:51.206855       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:27:51.305717       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:27:51.305831       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:27:51.307064       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4d500c5582d7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 23:34:08.809079       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 23:34:08.830727       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1204 23:34:08.830876       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:34:08.863318       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 23:34:08.863364       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 23:34:08.863390       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:34:08.866204       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:34:08.866652       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:34:08.866681       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:34:08.868711       1 config.go:199] "Starting service config controller"
	I1204 23:34:08.869077       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:34:08.869308       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:34:08.869337       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:34:08.870512       1 config.go:328] "Starting node config controller"
	I1204 23:34:08.870544       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:34:08.970002       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:34:08.970040       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:34:08.970567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a5a6b8eb38e] <==
	I1204 23:27:45.983251       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1204 23:30:28.192188       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 051ac1c9-8f93-41a0-a61e-4bd649cbcde5(default/busybox-7dff88458-fvhj6) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-fvhj6"
	E1204 23:30:28.192284       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 051ac1c9-8f93-41a0-a61e-4bd649cbcde5(default/busybox-7dff88458-fvhj6) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-fvhj6"
	I1204 23:30:28.192310       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fvhj6" node="ha-098000-m02"
	E1204 23:30:28.227861       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-rlnh2 is already present in the active queue" pod="default/busybox-7dff88458-rlnh2"
	E1204 23:30:54.897693       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vtbzp\": pod kindnet-vtbzp is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vtbzp" node="ha-098000-m04"
	E1204 23:30:54.897853       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vtbzp\": pod kindnet-vtbzp is already assigned to node \"ha-098000-m04\"" pod="kube-system/kindnet-vtbzp"
	E1204 23:30:54.897931       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pdg7h\": pod kube-proxy-pdg7h is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pdg7h" node="ha-098000-m04"
	E1204 23:30:54.897986       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pdg7h\": pod kube-proxy-pdg7h is already assigned to node \"ha-098000-m04\"" pod="kube-system/kube-proxy-pdg7h"
	E1204 23:30:54.935358       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x7xvx\": pod kindnet-x7xvx is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x7xvx" node="ha-098000-m04"
	E1204 23:30:54.935544       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x7xvx\": pod kindnet-x7xvx is already assigned to node \"ha-098000-m04\"" pod="kube-system/kindnet-x7xvx"
	E1204 23:30:54.936188       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bktcq\": pod kindnet-bktcq is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bktcq" node="ha-098000-m04"
	E1204 23:30:54.936258       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5ff5e29d-8bdb-492f-8be8-65295fb7d83f(kube-system/kindnet-bktcq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bktcq"
	E1204 23:30:54.936329       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bktcq\": pod kindnet-bktcq is already assigned to node \"ha-098000-m04\"" pod="kube-system/kindnet-bktcq"
	I1204 23:30:54.936384       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bktcq" node="ha-098000-m04"
	E1204 23:30:54.935423       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rgp97\": pod kube-proxy-rgp97 is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rgp97" node="ha-098000-m04"
	E1204 23:30:54.935358       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mz4q2\": pod kube-proxy-mz4q2 is already assigned to node \"ha-098000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mz4q2" node="ha-098000-m04"
	E1204 23:30:54.937674       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mz4q2\": pod kube-proxy-mz4q2 is already assigned to node \"ha-098000-m04\"" pod="kube-system/kube-proxy-mz4q2"
	E1204 23:30:54.939537       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c066164d-5b0a-40ca-93b9-d13c732f8d23(kube-system/kube-proxy-rgp97) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rgp97"
	E1204 23:30:54.939583       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rgp97\": pod kube-proxy-rgp97 is already assigned to node \"ha-098000-m04\"" pod="kube-system/kube-proxy-rgp97"
	I1204 23:30:54.939599       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rgp97" node="ha-098000-m04"
	I1204 23:32:45.399421       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1204 23:32:45.401282       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:32:45.403399       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1204 23:32:45.416820       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [832c9a15fccb] <==
	I1204 23:33:19.647940       1 serving.go:386] Generated self-signed cert in-memory
	W1204 23:33:30.004268       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1204 23:33:30.004311       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 23:33:30.004317       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 23:33:38.634637       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 23:33:38.636924       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:33:38.643589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:33:38.644074       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 23:33:38.644906       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:33:38.645277       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 23:33:38.745790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 23:34:37 ha-098000 kubelet[1562]: E1204 23:34:37.992685    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:34:39 ha-098000 kubelet[1562]: I1204 23:34:39.407493    1562 scope.go:117] "RemoveContainer" containerID="d45b7ca2c321bb88eb0207b6b8d2cc8e28c3a5dfeb3831e851f9d73934d05579"
	Dec 04 23:34:53 ha-098000 kubelet[1562]: I1204 23:34:53.407334    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:34:53 ha-098000 kubelet[1562]: E1204 23:34:53.407489    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:05 ha-098000 kubelet[1562]: I1204 23:35:05.407398    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:05 ha-098000 kubelet[1562]: E1204 23:35:05.407597    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:11 ha-098000 kubelet[1562]: E1204 23:35:11.434026    1562 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 23:35:11 ha-098000 kubelet[1562]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 23:35:11 ha-098000 kubelet[1562]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 23:35:11 ha-098000 kubelet[1562]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 23:35:11 ha-098000 kubelet[1562]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 23:35:18 ha-098000 kubelet[1562]: I1204 23:35:18.406757    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:18 ha-098000 kubelet[1562]: E1204 23:35:18.407152    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:31 ha-098000 kubelet[1562]: I1204 23:35:31.407458    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:31 ha-098000 kubelet[1562]: E1204 23:35:31.408574    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:42 ha-098000 kubelet[1562]: I1204 23:35:42.407061    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:42 ha-098000 kubelet[1562]: E1204 23:35:42.407183    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:35:54 ha-098000 kubelet[1562]: I1204 23:35:54.407450    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:35:54 ha-098000 kubelet[1562]: E1204 23:35:54.407806    1562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f7564fc1-72eb-47fc-a159-c6463cf27fb3)\"" pod="kube-system/storage-provisioner" podUID="f7564fc1-72eb-47fc-a159-c6463cf27fb3"
	Dec 04 23:36:08 ha-098000 kubelet[1562]: I1204 23:36:08.406922    1562 scope.go:117] "RemoveContainer" containerID="59729ff8ece5d7271c881a1f8b764e54fa3eb651a09ea5485de6229cdf7a4c30"
	Dec 04 23:36:11 ha-098000 kubelet[1562]: E1204 23:36:11.431349    1562 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 23:36:11 ha-098000 kubelet[1562]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 23:36:11 ha-098000 kubelet[1562]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 23:36:11 ha-098000 kubelet[1562]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 23:36:11 ha-098000 kubelet[1562]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-098000 -n ha-098000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-098000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-pfjg5
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-098000 describe pod busybox-7dff88458-pfjg5
helpers_test.go:282: (dbg) kubectl --context ha-098000 describe pod busybox-7dff88458-pfjg5:

-- stdout --
	Name:             busybox-7dff88458-pfjg5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fgxgh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fgxgh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  15s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  13s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  13s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  13s (x2 over 15s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (4.56s)
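The describe output above tells the whole story: with the secondary control-plane node deleted, the scheduler sees 0/4 usable nodes — one carries the node.kubernetes.io/unreachable taint, one is cordoned, and the remaining two already run busybox replicas that the workload's pod anti-affinity rules exclude, so busybox-7dff88458-pfjg5 can never leave Pending. A minimal triage sketch, assuming the ha-098000 context from the log (plain kubectl, not part of the test harness):

	kubectl --context ha-098000 get nodes -o wide                        # which node is NotReady / SchedulingDisabled
	kubectl --context ha-098000 describe nodes | grep -A2 'Taints:'      # confirm the unreachable taint
	kubectl --context ha-098000 get pods -l app=busybox -o wide          # replicas already on the two schedulable nodes
	kubectl --context ha-098000 get rs busybox-7dff88458 -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'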

TestImageBuild/serial/Setup (78.27s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-302000 --driver=hyperkit 
E1204 15:41:36.518517   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p image-302000 --driver=hyperkit : exit status 90 (1m18.08973776s)

-- stdout --
	* [image-302000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "image-302000" primary control-plane node in "image-302000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 04 23:41:30 image-302000 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:41:30 image-302000 dockerd[511]: time="2024-12-04T23:41:30.255068752Z" level=info msg="Starting up"
	Dec 04 23:41:30 image-302000 dockerd[511]: time="2024-12-04T23:41:30.255550223Z" level=info msg="containerd not running, starting managed containerd"
	Dec 04 23:41:30 image-302000 dockerd[511]: time="2024-12-04T23:41:30.256147393Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.272474641Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.289978935Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290043617Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290106897Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290144004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290219805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290260843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290405718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290445624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290476931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290506165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290655233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.290838161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.292440145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.292531923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.292675513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.292719491Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.292814704Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.292886312Z" level=info msg="metadata content store policy set" policy=shared
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.295341109Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.295422848Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.295519780Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.295562638Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.295596719Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.295685141Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.295925435Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296043012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296083293Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296115861Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296153149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296190887Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296224919Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296256332Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296292277Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296325046Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296361037Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296401376Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296452552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296491193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296523262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296605641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296651550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296683039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296713478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296743761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296777385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296809331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296839069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296878251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296912195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296944263Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.296986587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297022178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297052675Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297127992Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297171898Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297205902Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297235768Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297264588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297294174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297324784Z" level=info msg="NRI interface is disabled by configuration."
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297498396Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297562667Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297616113Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 04 23:41:30 image-302000 dockerd[519]: time="2024-12-04T23:41:30.297658722Z" level=info msg="containerd successfully booted in 0.025885s"
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.288111420Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.293914628Z" level=info msg="Loading containers: start."
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.388359411Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.476306346Z" level=info msg="Loading containers: done."
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.483081346Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.483113825Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.483128893Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.483189685Z" level=info msg="Daemon has completed initialization"
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.509240101Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 04 23:41:31 image-302000 systemd[1]: Started Docker Application Container Engine.
	Dec 04 23:41:31 image-302000 dockerd[511]: time="2024-12-04T23:41:31.511790036Z" level=info msg="API listen on [::]:2376"
	Dec 04 23:41:32 image-302000 dockerd[511]: time="2024-12-04T23:41:32.489605330Z" level=info msg="Processing signal 'terminated'"
	Dec 04 23:41:32 image-302000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 04 23:41:32 image-302000 dockerd[511]: time="2024-12-04T23:41:32.490834979Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 04 23:41:32 image-302000 dockerd[511]: time="2024-12-04T23:41:32.491117377Z" level=info msg="Daemon shutdown complete"
	Dec 04 23:41:32 image-302000 dockerd[511]: time="2024-12-04T23:41:32.491198637Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 04 23:41:32 image-302000 dockerd[511]: time="2024-12-04T23:41:32.491238101Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 04 23:41:33 image-302000 systemd[1]: docker.service: Deactivated successfully.
	Dec 04 23:41:33 image-302000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 04 23:41:33 image-302000 systemd[1]: Starting Docker Application Container Engine...
	Dec 04 23:41:33 image-302000 dockerd[916]: time="2024-12-04T23:41:33.529185586Z" level=info msg="Starting up"
	Dec 04 23:42:33 image-302000 dockerd[916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 04 23:42:33 image-302000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 04 23:42:33 image-302000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 04 23:42:33 image-302000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-amd64 start -p image-302000 --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p image-302000 -n image-302000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p image-302000 -n image-302000: exit status 6 (174.253404ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1204 15:42:33.699586   20620 status.go:458] kubeconfig endpoint: get endpoint: "image-302000" does not appear in /Users/jenkins/minikube-integration/20045-17258/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "image-302000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestImageBuild/serial/Setup (78.27s)
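The journal above shows the actual failure mode: the VM boots and dockerd comes up once, but when minikube's provisioner restarts docker to apply its configuration, the second dockerd (pid 916) spends 60 seconds failing to dial /run/containerd/containerd.sock and exits, so start aborts with RUNTIME_ENABLE. A hedged triage sketch, assuming the image-302000 VM were still reachable (standard minikube and systemd commands; none of this comes from the test itself):

	minikube ssh -p image-302000
	sudo systemctl status docker containerd --no-pager    # was containerd left stopped by the docker restart?
	sudo journalctl -u containerd --no-pager | tail -n 50 # why the socket never came back up
	ls -l /run/containerd/containerd.sock                 # the socket the second dockerd timed out dialing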

TestMountStart/serial/StartWithMountFirst (137.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-846000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E1204 15:45:53.814345   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:46:36.527547   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:47:16.890424   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-846000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m17.067882422s)

-- stdout --
	* [mount-start-1-846000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-846000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-846000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:fb:03:ff:d8:48
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-846000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:d9:ce:9f:34:c7
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:d9:ce:9f:34:c7
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-846000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-846000 -n mount-start-1-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-846000 -n mount-start-1-846000: exit status 7 (99.707732ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 15:48:00.141351   20865 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1204 15:48:00.141380   20865 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-846000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (137.17s)
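"IP address never found in dhcp leases file" is the hyperkit driver's way of saying the VM booted but its MAC address never appeared in macOS's bootpd lease database, so provisioning gave up after two attempts (note the MAC changes between the retry and the final error). The same signature recurs in TestScheduledStopUnix and TestPause/serial/Start below. A host-side sketch of what one might check, using the MAC from the stderr above (/var/db/dhcpd_leases is where the built-in DHCP server records vmnet leases; the leading-zero caveat is a known quirk, stated here as an assumption):

	# look for the guest MAC in the bootpd lease file
	grep -i -B1 -A3 'ca:d9:ce:9f:34:c7' /var/db/dhcpd_leases
	# bootpd may record octets without leading zeros, e.g. 0e:99:... as e:99:...
	sudo cat /var/db/dhcpd_leases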

TestScheduledStopUnix (142.44s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-948000 --memory=2048 --driver=hyperkit 
E1204 16:00:53.940842   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:01:36.654542   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-948000 --memory=2048 --driver=hyperkit : exit status 80 (2m17.061054721s)

-- stdout --
	* [scheduled-stop-948000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-948000" primary control-plane node in "scheduled-stop-948000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-948000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 0e:99:e3:f2:2e:ec
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-948000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:d8:eb:5b:62:87
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:d8:eb:5b:62:87
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-948000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-948000" primary control-plane node in "scheduled-stop-948000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-948000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 0e:99:e3:f2:2e:ec
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-948000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:d8:eb:5b:62:87
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:d8:eb:5b:62:87
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-04 16:02:17.296339 -0800 PST m=+2987.997594516
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-948000 -n scheduled-stop-948000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-948000 -n scheduled-stop-948000: exit status 7 (102.601055ms)

-- stdout --
	Error

                                                
** stderr ** 
	E1204 16:02:17.396844   22046 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1204 16:02:17.396866   22046 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-948000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-948000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-948000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-948000: (5.278269324s)
--- FAIL: TestScheduledStopUnix (142.44s)

TestPause/serial/Start (141.26s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-609000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-609000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m21.14555005s)

-- stdout --
	* [pause-609000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-609000" primary control-plane node in "pause-609000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-609000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:16:4b:07:d6:8d
	* Failed to start hyperkit VM. Running "minikube delete -p pause-609000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5a:2f:6b:12:ed:c4
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5a:2f:6b:12:ed:c4
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-609000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-609000 -n pause-609000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-609000 -n pause-609000: exit status 7 (113.090416ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 16:44:04.194720   24872 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1204 16:44:04.194744   24872 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-609000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.26s)
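Three tests in this run (TestMountStart/serial/StartWithMountFirst, TestScheduledStopUnix, TestPause/serial/Start) failed with the identical GUEST_PROVISION / dhcp-leases signature across different MACs and roughly an hour of wall time, which points at the agent's DHCP service rather than any individual test. One hedged host-level check, assuming the macOS application firewall could be interfering with bootpd (socketfilterfw is the stock firewall CLI; whether it is the culprit here is unverified):

	/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate
	sudo /usr/libexec/ApplicationFirewall/socketfilterfw --listapps | grep -i bootpd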


Test pass (289/324)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.99
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.31
9 TestDownloadOnly/v1.20.0/DeleteAll 0.26
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.24
12 TestDownloadOnly/v1.31.2/json-events 7.46
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.31
18 TestDownloadOnly/v1.31.2/DeleteAll 0.26
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 1.01
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
27 TestAddons/Setup 221.56
29 TestAddons/serial/Volcano 42.59
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.61
35 TestAddons/parallel/Registry 14.38
36 TestAddons/parallel/Ingress 19.77
37 TestAddons/parallel/InspektorGadget 10.46
38 TestAddons/parallel/MetricsServer 5.51
40 TestAddons/parallel/CSI 53.32
41 TestAddons/parallel/Headlamp 17.41
42 TestAddons/parallel/CloudSpanner 5.43
43 TestAddons/parallel/LocalPath 53.62
44 TestAddons/parallel/NvidiaDevicePlugin 5.37
45 TestAddons/parallel/Yakd 11.48
47 TestAddons/StoppedEnableDisable 6.01
55 TestHyperKitDriverInstallOrUpdate 9.08
58 TestErrorSpam/setup 39.31
59 TestErrorSpam/start 1.74
60 TestErrorSpam/status 0.59
61 TestErrorSpam/pause 1.43
62 TestErrorSpam/unpause 1.44
63 TestErrorSpam/stop 155.83
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 77.87
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 40.92
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
75 TestFunctional/serial/CacheCmd/cache/add_local 1.43
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.1
77 TestFunctional/serial/CacheCmd/cache/list 0.09
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.18
80 TestFunctional/serial/CacheCmd/cache/delete 0.19
81 TestFunctional/serial/MinikubeKubectlCmd 1.16
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.76
83 TestFunctional/serial/ExtraConfig 40.24
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 2.72
86 TestFunctional/serial/LogsFileCmd 2.65
87 TestFunctional/serial/InvalidService 5.7
89 TestFunctional/parallel/ConfigCmd 0.57
90 TestFunctional/parallel/DashboardCmd 10.51
91 TestFunctional/parallel/DryRun 1.2
92 TestFunctional/parallel/InternationalLanguage 0.63
93 TestFunctional/parallel/StatusCmd 0.56
97 TestFunctional/parallel/ServiceCmdConnect 12.43
98 TestFunctional/parallel/AddonsCmd 0.26
99 TestFunctional/parallel/PersistentVolumeClaim 27.42
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 1.1
103 TestFunctional/parallel/MySQL 24.9
104 TestFunctional/parallel/FileSync 0.19
105 TestFunctional/parallel/CertSync 1.04
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.16
113 TestFunctional/parallel/License 0.5
114 TestFunctional/parallel/Version/short 0.17
115 TestFunctional/parallel/Version/components 0.37
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.17
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.17
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.18
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.59
121 TestFunctional/parallel/ImageCommands/Setup 1.75
122 TestFunctional/parallel/DockerEnv/bash 0.62
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.79
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.64
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.03
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.24
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.05
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
144 TestFunctional/parallel/ServiceCmd/DeployApp 7.13
145 TestFunctional/parallel/ServiceCmd/List 0.8
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.8
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
148 TestFunctional/parallel/ServiceCmd/Format 0.47
149 TestFunctional/parallel/ServiceCmd/URL 0.46
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
151 TestFunctional/parallel/ProfileCmd/profile_list 0.33
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
153 TestFunctional/parallel/MountCmd/any-port 6.58
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.37
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 197.12
163 TestMultiControlPlane/serial/DeployApp 5.76
164 TestMultiControlPlane/serial/PingHostFromPods 1.42
165 TestMultiControlPlane/serial/AddWorkerNode 49.57
166 TestMultiControlPlane/serial/NodeLabels 0.24
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.71
168 TestMultiControlPlane/serial/CopyFile 10.57
169 TestMultiControlPlane/serial/StopSecondaryNode 8.77
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.45
171 TestMultiControlPlane/serial/RestartSecondaryNode 40.32
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.54
176 TestMultiControlPlane/serial/StopCluster 24.99
177 TestMultiControlPlane/serial/RestartCluster 164.39
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.43
179 TestMultiControlPlane/serial/AddSecondaryNode 75.18
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.65
186 TestJSONOutput/start/Command 84.61
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 0.5
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 0.47
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 8.35
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.63
214 TestMainNoArgs 0.09
215 TestMinikubeProfile 88.92
221 TestMultiNode/serial/FreshStart2Nodes 109.66
222 TestMultiNode/serial/DeployApp2Nodes 4.99
223 TestMultiNode/serial/PingHostFrom2Pods 0.97
224 TestMultiNode/serial/AddNode 45.52
225 TestMultiNode/serial/MultiNodeLabels 0.07
226 TestMultiNode/serial/ProfileList 0.39
227 TestMultiNode/serial/CopyFile 5.92
228 TestMultiNode/serial/StopNode 2.92
229 TestMultiNode/serial/StartAfterStop 41.68
230 TestMultiNode/serial/RestartKeepsNodes 156.48
231 TestMultiNode/serial/DeleteNode 3.42
232 TestMultiNode/serial/StopMultiNode 16.85
233 TestMultiNode/serial/RestartMultiNode 123.55
234 TestMultiNode/serial/ValidateNameConflict 45.53
238 TestPreload 145.76
241 TestSkaffold 115.57
244 TestRunningBinaryUpgrade 99.77
246 TestKubernetesUpgrade 1382.17
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.46
260 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.02
261 TestStoppedBinaryUpgrade/Setup 1.4
262 TestStoppedBinaryUpgrade/Upgrade 124.62
265 TestStoppedBinaryUpgrade/MinikubeLogs 2.41
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.59
275 TestNoKubernetes/serial/StartWithK8s 75.34
276 TestNetworkPlugins/group/auto/Start 65.23
277 TestNoKubernetes/serial/StartWithStopK8s 17.82
278 TestNoKubernetes/serial/Start 19.48
279 TestNetworkPlugins/group/auto/KubeletFlags 0.17
280 TestNetworkPlugins/group/auto/NetCatPod 11.16
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
282 TestNoKubernetes/serial/ProfileList 0.65
283 TestNoKubernetes/serial/Stop 2.41
284 TestNetworkPlugins/group/auto/DNS 0.17
285 TestNoKubernetes/serial/StartNoArgs 19.37
286 TestNetworkPlugins/group/auto/Localhost 0.1
287 TestNetworkPlugins/group/auto/HairPin 0.1
288 TestNetworkPlugins/group/flannel/Start 52.66
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.15
290 TestNetworkPlugins/group/enable-default-cni/Start 58.88
291 TestNetworkPlugins/group/flannel/ControllerPod 6.01
292 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
293 TestNetworkPlugins/group/flannel/NetCatPod 13.18
294 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.17
295 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.14
296 TestNetworkPlugins/group/flannel/DNS 0.13
297 TestNetworkPlugins/group/flannel/Localhost 0.11
298 TestNetworkPlugins/group/flannel/HairPin 0.11
299 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
300 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
301 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
302 TestNetworkPlugins/group/bridge/Start 51.8
303 TestNetworkPlugins/group/kindnet/Start 79.46
304 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
305 TestNetworkPlugins/group/bridge/NetCatPod 13.14
306 TestNetworkPlugins/group/bridge/DNS 0.13
307 TestNetworkPlugins/group/bridge/Localhost 0.11
308 TestNetworkPlugins/group/bridge/HairPin 0.11
309 TestNetworkPlugins/group/kindnet/ControllerPod 6
310 TestNetworkPlugins/group/kubenet/Start 74.5
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
312 TestNetworkPlugins/group/kindnet/NetCatPod 12.16
313 TestNetworkPlugins/group/kindnet/DNS 0.13
314 TestNetworkPlugins/group/kindnet/Localhost 0.11
315 TestNetworkPlugins/group/kindnet/HairPin 0.11
316 TestNetworkPlugins/group/custom-flannel/Start 54.64
317 TestNetworkPlugins/group/kubenet/KubeletFlags 0.2
318 TestNetworkPlugins/group/kubenet/NetCatPod 11.14
319 TestNetworkPlugins/group/kubenet/DNS 0.13
320 TestNetworkPlugins/group/kubenet/Localhost 0.1
321 TestNetworkPlugins/group/kubenet/HairPin 0.12
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.16
324 TestNetworkPlugins/group/custom-flannel/DNS 0.15
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
327 TestNetworkPlugins/group/calico/Start 66.65
328 TestNetworkPlugins/group/false/Start 165.01
329 TestNetworkPlugins/group/calico/ControllerPod 6
330 TestNetworkPlugins/group/calico/KubeletFlags 0.18
331 TestNetworkPlugins/group/calico/NetCatPod 11.14
332 TestNetworkPlugins/group/calico/DNS 0.14
333 TestNetworkPlugins/group/calico/Localhost 0.11
334 TestNetworkPlugins/group/calico/HairPin 0.12
336 TestStartStop/group/old-k8s-version/serial/FirstStart 163.59
337 TestNetworkPlugins/group/false/KubeletFlags 0.18
338 TestNetworkPlugins/group/false/NetCatPod 12.14
339 TestNetworkPlugins/group/false/DNS 0.12
340 TestNetworkPlugins/group/false/Localhost 0.11
341 TestNetworkPlugins/group/false/HairPin 0.1
343 TestStartStop/group/no-preload/serial/FirstStart 82.56
344 TestStartStop/group/old-k8s-version/serial/DeployApp 8.32
345 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.81
346 TestStartStop/group/old-k8s-version/serial/Stop 8.43
347 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.36
348 TestStartStop/group/old-k8s-version/serial/SecondStart 399.66
349 TestStartStop/group/no-preload/serial/DeployApp 8.22
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
351 TestStartStop/group/no-preload/serial/Stop 8.48
352 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.36
353 TestStartStop/group/no-preload/serial/SecondStart 291.4
354 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
355 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
356 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.18
357 TestStartStop/group/no-preload/serial/Pause 2.06
359 TestStartStop/group/embed-certs/serial/FirstStart 48.31
360 TestStartStop/group/embed-certs/serial/DeployApp 8.2
361 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.76
362 TestStartStop/group/embed-certs/serial/Stop 8.45
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.36
365 TestStartStop/group/embed-certs/serial/SecondStart 290.91
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.18
368 TestStartStop/group/old-k8s-version/serial/Pause 2.03
370 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.94
371 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.21
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.8
373 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.46
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.36
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 294.62
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
378 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.18
379 TestStartStop/group/embed-certs/serial/Pause 2.07
381 TestStartStop/group/newest-cni/serial/FirstStart 41.85
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
384 TestStartStop/group/newest-cni/serial/Stop 8.43
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.36
386 TestStartStop/group/newest-cni/serial/SecondStart 30.26
387 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
388 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.18
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.35
391 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
393 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
394 TestStartStop/group/newest-cni/serial/Pause 2.05
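For triage, the slowest entries in the duration table above can be pulled out with a short shell pipeline. This is a sketch only: it assumes the three-column list (seq, test name, seconds) has been saved to a file named durations.txt, a hypothetical name that this report does not itself produce.

  # Print the ten slowest tests, longest first (assumes durations.txt
  # holds the seq / test-name / seconds columns exactly as listed above).
  sort -rn -k3,3 durations.txt | head -n 10 | awk '{printf "%9.2fs  %s\n", $3, $2}'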
TestDownloadOnly/v1.20.0/json-events (14.99s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-862000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-862000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (14.990895448s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.99s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1204 15:12:44.145521   17821 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1204 15:12:44.145718   17821 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
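The preload-exists check passes because the tarball downloaded by the previous subtest is already on disk. As a quick manual sanity check, something like the following should list the cached file (path copied from the log above; this assumes MINIKUBE_HOME is set, as it is in this run):

  # List the cached v1.20.0 preload tarball (path taken from the log above).
  ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4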

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-862000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-862000: exit status 85 (306.109922ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-862000 | jenkins | v1.34.0 | 04 Dec 24 15:12 PST |          |
	|         | -p download-only-862000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 15:12:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 15:12:29.220136   17822 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:12:29.220453   17822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:12:29.220459   17822 out.go:358] Setting ErrFile to fd 2...
	I1204 15:12:29.220463   17822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:12:29.220657   17822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	W1204 15:12:29.220762   17822 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20045-17258/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20045-17258/.minikube/config/config.json: no such file or directory
	I1204 15:12:29.222775   17822 out.go:352] Setting JSON to true
	I1204 15:12:29.251642   17822 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4319,"bootTime":1733349630,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 15:12:29.251808   17822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:12:29.273841   17822 out.go:97] [download-only-862000] minikube v1.34.0 on Darwin 15.0.1
	I1204 15:12:29.274021   17822 notify.go:220] Checking for updates...
	W1204 15:12:29.274097   17822 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball: no such file or directory
	I1204 15:12:29.295863   17822 out.go:169] MINIKUBE_LOCATION=20045
	I1204 15:12:29.318969   17822 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:12:29.340751   17822 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 15:12:29.362073   17822 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:12:29.383833   17822 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	W1204 15:12:29.425925   17822 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 15:12:29.426437   17822 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:12:29.514984   17822 out.go:97] Using the hyperkit driver based on user configuration
	I1204 15:12:29.515044   17822 start.go:297] selected driver: hyperkit
	I1204 15:12:29.515060   17822 start.go:901] validating driver "hyperkit" against <nil>
	I1204 15:12:29.515255   17822 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:12:29.515587   17822 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 15:12:29.905582   17822 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 15:12:29.911953   17822 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:12:29.911976   17822 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 15:12:29.912003   17822 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:12:29.917596   17822 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1204 15:12:29.917742   17822 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 15:12:29.917773   17822 cni.go:84] Creating CNI manager for ""
	I1204 15:12:29.917817   17822 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 15:12:29.917888   17822 start.go:340] cluster config:
	{Name:download-only-862000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-862000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:12:29.918148   17822 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:12:29.939879   17822 out.go:97] Downloading VM boot image ...
	I1204 15:12:29.939966   17822 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1204 15:12:35.768814   17822 out.go:97] Starting "download-only-862000" primary control-plane node in "download-only-862000" cluster
	I1204 15:12:35.768851   17822 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:12:35.830648   17822 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1204 15:12:35.830687   17822 cache.go:56] Caching tarball of preloaded images
	I1204 15:12:35.831101   17822 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:12:35.852646   17822 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1204 15:12:35.852673   17822 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1204 15:12:35.943332   17822 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-862000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-862000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.31s)

TestDownloadOnly/v1.20.0/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.26s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-862000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.24s)

TestDownloadOnly/v1.31.2/json-events (7.46s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-859000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-859000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=hyperkit : (7.459448665s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (7.46s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1204 15:12:52.407819   17821 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1204 15:12:52.407859   17821 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-859000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-859000: exit status 85 (307.377809ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-862000 | jenkins | v1.34.0 | 04 Dec 24 15:12 PST |                     |
	|         | -p download-only-862000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:12 PST | 04 Dec 24 15:12 PST |
	| delete  | -p download-only-862000        | download-only-862000 | jenkins | v1.34.0 | 04 Dec 24 15:12 PST | 04 Dec 24 15:12 PST |
	| start   | -o=json --download-only        | download-only-859000 | jenkins | v1.34.0 | 04 Dec 24 15:12 PST |                     |
	|         | -p download-only-859000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 15:12:45
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 15:12:45.013596   17848 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:12:45.014429   17848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:12:45.014437   17848 out.go:358] Setting ErrFile to fd 2...
	I1204 15:12:45.014444   17848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:12:45.014776   17848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:12:45.016702   17848 out.go:352] Setting JSON to true
	I1204 15:12:45.045293   17848 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4335,"bootTime":1733349630,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 15:12:45.045456   17848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:12:45.068203   17848 out.go:97] [download-only-859000] minikube v1.34.0 on Darwin 15.0.1
	I1204 15:12:45.068389   17848 notify.go:220] Checking for updates...
	I1204 15:12:45.089232   17848 out.go:169] MINIKUBE_LOCATION=20045
	I1204 15:12:45.110571   17848 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:12:45.132308   17848 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 15:12:45.153211   17848 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:12:45.174350   17848 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	W1204 15:12:45.216243   17848 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 15:12:45.216851   17848 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:12:45.249262   17848 out.go:97] Using the hyperkit driver based on user configuration
	I1204 15:12:45.249312   17848 start.go:297] selected driver: hyperkit
	I1204 15:12:45.249328   17848 start.go:901] validating driver "hyperkit" against <nil>
	I1204 15:12:45.249526   17848 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:12:45.249805   17848 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/20045-17258/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1204 15:12:45.262188   17848 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1204 15:12:45.268735   17848 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:12:45.268764   17848 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1204 15:12:45.268787   17848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:12:45.273876   17848 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1204 15:12:45.274039   17848 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 15:12:45.274070   17848 cni.go:84] Creating CNI manager for ""
	I1204 15:12:45.274116   17848 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:12:45.274124   17848 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:12:45.274203   17848 start.go:340] cluster config:
	{Name:download-only-859000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-859000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:12:45.274287   17848 iso.go:125] acquiring lock: {Name:mkebe69a28e14b2d56d585dc8f8608288176f34e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:12:45.295350   17848 out.go:97] Starting "download-only-859000" primary control-plane node in "download-only-859000" cluster
	I1204 15:12:45.295383   17848 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:12:45.354570   17848 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1204 15:12:45.354621   17848 cache.go:56] Caching tarball of preloaded images
	I1204 15:12:45.355092   17848 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:12:45.376346   17848 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1204 15:12:45.376394   17848 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 ...
	I1204 15:12:45.475513   17848 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4?checksum=md5:979f32540b837894423b337fec69fbf6 -> /Users/jenkins/minikube-integration/20045-17258/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-859000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-859000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.31s)

TestDownloadOnly/v1.31.2/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.26s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-859000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (1.01s)

=== RUN   TestBinaryMirror
I1204 15:12:53.686705   17821 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-935000 --alsologtostderr --binary-mirror http://127.0.0.1:56311 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-935000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-935000
--- PASS: TestBinaryMirror (1.01s)
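TestBinaryMirror points minikube at a local HTTP server instead of dl.k8s.io. A minimal sketch of the same idea, assuming a local mirror directory laid out like the upstream release URLs; the directory name, profile name, and use of python3's built-in server are illustrative, not part of the test:

  # Serve a local mirror with the same path layout as dl.k8s.io
  # (e.g. ./mirror/release/v1.31.2/bin/darwin/amd64/kubectl), then
  # tell minikube to fetch binaries from it instead of the internet.
  python3 -m http.server 56311 --directory ./mirror &
  out/minikube-darwin-amd64 start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:56311 --driver=hyperkit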

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-778000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-778000: exit status 85 (202.054492ms)

-- stdout --
	* Profile "addons-778000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-778000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-778000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-778000: exit status 85 (222.311112ms)

-- stdout --
	* Profile "addons-778000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-778000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (221.56s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-778000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-amd64 start -p addons-778000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m41.557821001s)
--- PASS: TestAddons/Setup (221.56s)
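The single long invocation above is easier to read when wrapped; this is the same command, flags copied verbatim from the log, only re-broken with shell line continuations:

  out/minikube-darwin-amd64 start -p addons-778000 \
    --wait=true --memory=4000 --alsologtostderr \
    --addons=registry --addons=metrics-server --addons=volumesnapshots \
    --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
    --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd \
    --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperkit \
    --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher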

                                                
                                    
TestAddons/serial/Volcano (42.59s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 13.651636ms
addons_test.go:807: volcano-scheduler stabilized in 13.701956ms
addons_test.go:823: volcano-controller stabilized in 13.996389ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-tfwd2" [cc4ef9e9-11c1-400a-b291-7b54541419c9] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.002742099s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-w68nq" [35350e36-283e-4258-831b-b5d3e0fc5013] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004489356s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-m56f2" [49f206d8-25dd-4f17-8262-04c75dea22b0] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004477532s
addons_test.go:842: (dbg) Run:  kubectl --context addons-778000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-778000 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-778000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [8ce385e1-e34d-4fc3-aa9f-0573553ea660] Pending
helpers_test.go:344: "test-job-nginx-0" [8ce385e1-e34d-4fc3-aa9f-0573553ea660] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [8ce385e1-e34d-4fc3-aa9f-0573553ea660] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.004665402s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-778000 addons disable volcano --alsologtostderr -v=1: (11.246828084s)
--- PASS: TestAddons/serial/Volcano (42.59s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-778000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-778000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (9.61s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-778000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-778000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f3cb55e8-83a5-4a59-ab05-70771f081bba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f3cb55e8-83a5-4a59-ab05-70771f081bba] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003846629s
addons_test.go:633: (dbg) Run:  kubectl --context addons-778000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-778000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-778000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-778000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.61s)

TestAddons/parallel/Registry (14.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.751624ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-4t8gf" [e85adf8c-afc7-4600-9ba7-e538a450bd10] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003463545s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5qzx4" [f8b381f4-3ed1-405b-bd18-4c8dfae037d4] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003731891s
addons_test.go:331: (dbg) Run:  kubectl --context addons-778000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-778000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-778000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.648504497s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 ip
2024/12/04 15:17:51 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.38s)
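The registry check above runs wget from inside the cluster against registry.kube-system.svc.cluster.local; the test's DEBUG line then hits the node IP on port 5000 from the host. A rough host-side equivalent, assuming the registry-proxy port is reachable as in this run (the /v2/_catalog path is the standard Docker registry API endpoint, not something the test itself calls):

  # Query the addon registry from the host, using the node IP minikube reports.
  curl -s "http://$(out/minikube-darwin-amd64 -p addons-778000 ip):5000/v2/_catalog"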

                                                
                                    
TestAddons/parallel/Ingress (19.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-778000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-778000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-778000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4b891820-4fb7-4ccc-b6be-18728493cd3f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4b891820-4fb7-4ccc-b6be-18728493cd3f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005296321s
I1204 15:19:14.639973   17821 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-778000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-778000 addons disable ingress-dns --alsologtostderr -v=1: (1.288008598s)
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-778000 addons disable ingress --alsologtostderr -v=1: (7.457467931s)
--- PASS: TestAddons/parallel/Ingress (19.77s)

TestAddons/parallel/InspektorGadget (10.46s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-w6vzs" [bfdf2eb3-2fac-49db-bf2f-d38fefdde718] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004399845s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-778000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.453297336s)
--- PASS: TestAddons/parallel/InspektorGadget (10.46s)

TestAddons/parallel/MetricsServer (5.51s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.666908ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9cnjm" [886ad9da-79aa-4aba-907c-5e0fddd2bffc] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002658992s
addons_test.go:402: (dbg) Run:  kubectl --context addons-778000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.51s)

TestAddons/parallel/CSI (53.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1204 15:18:14.185099   17821 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1204 15:18:14.188945   17821 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1204 15:18:14.188957   17821 kapi.go:107] duration metric: took 3.871219ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.87738ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-778000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-778000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [66567013-98a2-4cef-80e5-e8772b4e8a05] Pending
helpers_test.go:344: "task-pv-pod" [66567013-98a2-4cef-80e5-e8772b4e8a05] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [66567013-98a2-4cef-80e5-e8772b4e8a05] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.005905473s
addons_test.go:511: (dbg) Run:  kubectl --context addons-778000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-778000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-778000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-778000 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-778000 delete pod task-pv-pod: (1.227310715s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-778000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-778000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-778000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [37ad9b22-1ee0-4755-939a-2efcbddaef46] Pending
helpers_test.go:344: "task-pv-pod-restore" [37ad9b22-1ee0-4755-939a-2efcbddaef46] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [37ad9b22-1ee0-4755-939a-2efcbddaef46] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002730769s
addons_test.go:553: (dbg) Run:  kubectl --context addons-778000 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-778000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-778000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-778000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.511657703s)
--- PASS: TestAddons/parallel/CSI (53.32s)
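The repeated "get pvc ... -o jsonpath={.status.phase}" lines above are the harness polling until each claim binds. Outside the harness, recent kubectl can express the same wait in one command; a sketch, assuming a kubectl new enough to support --for=jsonpath (v1.23+):

  # Block until the PVC reports phase Bound, or give up after six minutes.
  kubectl --context addons-778000 wait --for=jsonpath='{.status.phase}'=Bound \
    pvc/hpvc --timeout=6m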

                                                
                                    
TestAddons/parallel/Headlamp (17.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-778000 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-42vn5" [5d3349c0-3147-40b2-bd33-ebc596bea7fd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-42vn5" [5d3349c0-3147-40b2-bd33-ebc596bea7fd] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.006048699s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-778000 addons disable headlamp --alsologtostderr -v=1: (5.476100054s)
--- PASS: TestAddons/parallel/Headlamp (17.41s)

TestAddons/parallel/CloudSpanner (5.43s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-nzw9q" [eb5074a2-e6f4-4169-ae05-b2e4d50b4189] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002946134s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.43s)

TestAddons/parallel/LocalPath (53.62s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-778000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-778000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-778000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e7e9de19-2a12-42a5-99d1-37bee57ea5a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e7e9de19-2a12-42a5-99d1-37bee57ea5a7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e7e9de19-2a12-42a5-99d1-37bee57ea5a7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00289035s
addons_test.go:906: (dbg) Run:  kubectl --context addons-778000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 ssh "cat /opt/local-path-provisioner/pvc-e9968669-5edf-4df5-ab17-3894cc86ce8a_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-778000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-778000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-778000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.906859594s)
--- PASS: TestAddons/parallel/LocalPath (53.62s)

TestAddons/parallel/NvidiaDevicePlugin (5.37s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cg7kv" [8497db4c-23c9-4e9b-9b2a-6935f8c1a7f1] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005361924s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.37s)

TestAddons/parallel/Yakd (11.48s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-s6tzj" [de292ef0-38b9-4c41-914c-465bf22c4a7a] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006151548s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-778000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-778000 addons disable yakd --alsologtostderr -v=1: (5.477932254s)
--- PASS: TestAddons/parallel/Yakd (11.48s)

TestAddons/StoppedEnableDisable (6.01s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-778000
addons_test.go:170: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-778000: (5.406434567s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-778000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-778000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-778000
--- PASS: TestAddons/StoppedEnableDisable (6.01s)

TestHyperKitDriverInstallOrUpdate (9.08s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1204 16:04:35.086430   17821 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 16:04:35.086603   17821 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
W1204 16:04:35.841909   17821 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1204 16:04:35.842123   17821 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1204 16:04:35.842182   17821 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/001/docker-machine-driver-hyperkit
I1204 16:04:36.330115   17821 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x71df100 0x71df100 0x71df100 0x71df100 0x71df100 0x71df100 0x71df100] Decompressors:map[bz2:0xc000888e40 gz:0xc000888e48 tar:0xc000888df0 tar.bz2:0xc000888e00 tar.gz:0xc000888e10 tar.xz:0xc000888e20 tar.zst:0xc000888e30 tbz2:0xc000888e00 tgz:0xc000888e10 txz:0xc000888e20 tzst:0xc000888e30 xz:0xc000888e50 zip:0xc000888e60 zst:0xc000888e58] Getters:map[file:0xc0006d02d0 http:0xc00087f900 https:0xc00087f950] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1204 16:04:36.330156   17821 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/001/docker-machine-driver-hyperkit
I1204 16:04:39.673315   17821 install.go:79] stdout: 
W1204 16:04:39.673469   17821 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/001/docker-machine-driver-hyperkit
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/001/docker-machine-driver-hyperkit

I1204 16:04:39.673504   17821 install.go:99] testing: [sudo -n chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/001/docker-machine-driver-hyperkit]
I1204 16:04:39.695506   17821 install.go:106] running: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/001/docker-machine-driver-hyperkit]
I1204 16:04:39.716208   17821 install.go:99] testing: [sudo -n chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/001/docker-machine-driver-hyperkit]
I1204 16:04:39.735922   17821 install.go:106] running: [sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/001/docker-machine-driver-hyperkit]
I1204 16:04:39.775454   17821 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 16:04:39.775583   17821 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I1204 16:04:40.478448   17821 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1204 16:04:40.478473   17821 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1204 16:04:40.478537   17821 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1204 16:04:40.478573   17821 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/002/docker-machine-driver-hyperkit
I1204 16:04:40.880536   17821 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x71df100 0x71df100 0x71df100 0x71df100 0x71df100 0x71df100 0x71df100] Decompressors:map[bz2:0xc000888e40 gz:0xc000888e48 tar:0xc000888df0 tar.bz2:0xc000888e00 tar.gz:0xc000888e10 tar.xz:0xc000888e20 tar.zst:0xc000888e30 tbz2:0xc000888e00 tgz:0xc000888e10 txz:0xc000888e20 tzst:0xc000888e30 xz:0xc000888e50 zip:0xc000888e60 zst:0xc000888e58] Getters:map[file:0xc0007641b0 http:0xc000729360 https:0xc0007293b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1204 16:04:40.880571   17821 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/002/docker-machine-driver-hyperkit
I1204 16:04:44.057874   17821 install.go:79] stdout: 
W1204 16:04:44.058042   17821 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/002/docker-machine-driver-hyperkit
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/002/docker-machine-driver-hyperkit

I1204 16:04:44.058073   17821 install.go:99] testing: [sudo -n chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/002/docker-machine-driver-hyperkit]
I1204 16:04:44.078938   17821 install.go:106] running: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/002/docker-machine-driver-hyperkit]
I1204 16:04:44.100257   17821 install.go:99] testing: [sudo -n chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/002/docker-machine-driver-hyperkit]
I1204 16:04:44.119791   17821 install.go:106] running: [sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate3173693798/002/docker-machine-driver-hyperkit]
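Note: the two download attempts above show the updater's fallback order: it first requests the arch-suffixed release asset (docker-machine-driver-hyperkit-amd64), and when that asset's .sha256 checksum file 404s it retries the common, unsuffixed asset. A minimal Go sketch of that pattern follows; the fetch and downloadDriver helpers are hypothetical stand-ins for minikube's internal getter, and checksum verification is omitted.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetch downloads url into dest and treats any non-200 response as an
	// error; a 404 on the checksum URL is what triggers the fallback above.
	func fetch(url, dest string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, resp.Body)
		return err
	}

	// downloadDriver tries the arch-specific asset first, then falls back
	// to the common asset, mirroring the driver.go:46 log line above.
	func downloadDriver(version, arch, dest string) error {
		base := "https://github.com/kubernetes/minikube/releases/download/" + version
		if err := fetch(base+"/docker-machine-driver-hyperkit-"+arch, dest); err == nil {
			return nil
		}
		return fetch(base+"/docker-machine-driver-hyperkit", dest)
	}

	func main() {
		if err := downloadDriver("v1.3.0", "amd64", os.TempDir()+"/docker-machine-driver-hyperkit"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}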
--- PASS: TestHyperKitDriverInstallOrUpdate (9.08s)

TestErrorSpam/setup (39.31s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-974000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-974000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 --driver=hyperkit : (39.314028118s)
--- PASS: TestErrorSpam/setup (39.31s)

TestErrorSpam/start (1.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 start --dry-run
--- PASS: TestErrorSpam/start (1.74s)

TestErrorSpam/status (0.59s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 status
--- PASS: TestErrorSpam/status (0.59s)

TestErrorSpam/pause (1.43s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.44s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (155.83s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 stop: (5.385687082s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 stop: (1m15.22551475s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 stop
E1204 15:21:36.479259   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:36.486314   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:36.499359   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:36.521017   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:36.563079   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:36.645216   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:36.808831   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:37.132362   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:37.774262   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:39.056914   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:41.619545   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:46.743340   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:21:56.987273   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:22:17.471192   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-974000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-974000 stop: (1m15.221712311s)
--- PASS: TestErrorSpam/stop (155.83s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20045-17258/.minikube/files/etc/test/nested/copy/17821/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-084000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E1204 15:22:58.434411   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-084000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m17.870200541s)
--- PASS: TestFunctional/serial/StartWithProxy (77.87s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.92s)

=== RUN   TestFunctional/serial/SoftStart
I1204 15:24:09.336823   17821 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-084000 --alsologtostderr -v=8
E1204 15:24:20.360513   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-084000 --alsologtostderr -v=8: (40.917654014s)
functional_test.go:663: soft start took 40.918065067s for "functional-084000" cluster.
I1204 15:24:50.255779   17821 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (40.92s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-084000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-084000 cache add registry.k8s.io/pause:3.1: (1.141733809s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-084000 cache add registry.k8s.io/pause:3.3: (1.175025485s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-084000 cache add registry.k8s.io/pause:latest: (1.0011787s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local3797626839/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 cache add minikube-local-cache-test:functional-084000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 cache delete minikube-local-cache-test:functional-084000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-084000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.10s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (168.848974ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
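The reload sequence above is: delete pause:latest inside the node, confirm crictl inspecti now fails (exit status 1), run cache reload, and confirm inspecti succeeds again. A standalone sketch of the same round trip, assuming the binary path and profile name taken from this run:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	const (
		bin     = "out/minikube-darwin-amd64" // binary path assumed from this run
		profile = "functional-084000"
		image   = "registry.k8s.io/pause:latest"
	)

	// run executes the minikube binary and returns the error (nil on exit 0).
	func run(args ...string) error {
		cmd := exec.Command(bin, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// Remove the image from inside the node.
		run("-p", profile, "ssh", "sudo", "docker", "rmi", image)

		// inspecti must fail while the image is absent from the node.
		if run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image) == nil {
			fmt.Println("expected inspecti to fail before reload")
		}

		// cache reload pushes cached images back into the running node.
		run("-p", profile, "cache", "reload")

		// inspecti should succeed again once the cache is reloaded.
		if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}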
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)

TestFunctional/serial/CacheCmd/cache/delete (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.19s)

TestFunctional/serial/MinikubeKubectlCmd (1.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 kubectl -- --context functional-084000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-084000 kubectl -- --context functional-084000 get pods: (1.155081807s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.76s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-084000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-084000 get pods: (1.758691308s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.76s)

TestFunctional/serial/ExtraConfig (40.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-084000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-084000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.23628141s)
functional_test.go:761: restart took 40.236407424s for "functional-084000" cluster.
I1204 15:25:40.022029   17821 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (40.24s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-084000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.72s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-084000 logs: (2.72403732s)
--- PASS: TestFunctional/serial/LogsCmd (2.72s)

TestFunctional/serial/LogsFileCmd (2.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2288122435/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-084000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2288122435/001/logs.txt: (2.645977711s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.65s)

TestFunctional/serial/InvalidService (5.7s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-084000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-084000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-084000: exit status 115 (311.347241ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:31048 |
	|-----------|-------------|-------------|--------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-084000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-084000 delete -f testdata/invalidsvc.yaml: (2.253749632s)
--- PASS: TestFunctional/serial/InvalidService (5.70s)

TestFunctional/parallel/ConfigCmd (0.57s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 config get cpus: exit status 14 (69.526549ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 config get cpus: exit status 14 (69.442947ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)

TestFunctional/parallel/DashboardCmd (10.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-084000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-084000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 19080: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.51s)

TestFunctional/parallel/DryRun (1.2s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-084000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-084000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (614.9128ms)

-- stdout --
	* [functional-084000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile

-- /stdout --
** stderr ** 
	I1204 15:26:47.145080   19031 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:26:47.145323   19031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:26:47.145329   19031 out.go:358] Setting ErrFile to fd 2...
	I1204 15:26:47.145332   19031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:26:47.145509   19031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:26:47.146903   19031 out.go:352] Setting JSON to false
	I1204 15:26:47.175370   19031 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5177,"bootTime":1733349630,"procs":590,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 15:26:47.175526   19031 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:26:47.215352   19031 out.go:177] * [functional-084000] minikube v1.34.0 on Darwin 15.0.1
	I1204 15:26:47.289589   19031 notify.go:220] Checking for updates...
	I1204 15:26:47.310239   19031 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:26:47.331207   19031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:26:47.352338   19031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 15:26:47.373265   19031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:26:47.394413   19031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 15:26:47.436235   19031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:26:47.479182   19031 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:26:47.479866   19031 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:26:47.479935   19031 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:26:47.492043   19031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57460
	I1204 15:26:47.492412   19031 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:26:47.492859   19031 main.go:141] libmachine: Using API Version  1
	I1204 15:26:47.492881   19031 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:26:47.493143   19031 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:26:47.493279   19031 main.go:141] libmachine: (functional-084000) Calling .DriverName
	I1204 15:26:47.493487   19031 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:26:47.493756   19031 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:26:47.493788   19031 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:26:47.504753   19031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57462
	I1204 15:26:47.505126   19031 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:26:47.505484   19031 main.go:141] libmachine: Using API Version  1
	I1204 15:26:47.505497   19031 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:26:47.505714   19031 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:26:47.505824   19031 main.go:141] libmachine: (functional-084000) Calling .DriverName
	I1204 15:26:47.537496   19031 out.go:177] * Using the hyperkit driver based on existing profile
	I1204 15:26:47.579410   19031 start.go:297] selected driver: hyperkit
	I1204 15:26:47.579443   19031 start.go:901] validating driver "hyperkit" against &{Name:functional-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:26:47.579670   19031 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:26:47.607263   19031 out.go:201] 
	W1204 15:26:47.630358   19031 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1204 15:26:47.651381   19031 out.go:201]

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-084000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
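Exit status 23 above is minikube's requested-resource validation, not a VM failure: --dry-run runs the same pre-flight checks, and 250MiB is below the 1800MB floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message. The gate reduces to a single comparison; a hedged sketch follows (the constant is taken from the message above, and the function name is illustrative, not minikube's actual API):

	package main

	import "fmt"

	// minUsableMB is the floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY
	// message above; the real value lives inside minikube's start validation.
	const minUsableMB = 1800

	// validateRequestedMemory returns an error when the requested allocation
	// is below the usable minimum, which is why --dry-run can exit 23
	// without ever creating a VM.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateRequestedMemory(250))  // fails, as in the dry run above
		fmt.Println(validateRequestedMemory(4000)) // passes; this profile uses 4000MB
	}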
--- PASS: TestFunctional/parallel/DryRun (1.20s)

TestFunctional/parallel/InternationalLanguage (0.63s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-084000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-084000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (633.222075ms)

-- stdout --
	* [functional-084000] minikube v1.34.0 sur Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant

-- /stdout --
** stderr ** 
	I1204 15:26:46.501251   19024 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:26:46.501450   19024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:26:46.501455   19024 out.go:358] Setting ErrFile to fd 2...
	I1204 15:26:46.501458   19024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:26:46.501635   19024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:26:46.503234   19024 out.go:352] Setting JSON to false
	I1204 15:26:46.532638   19024 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5176,"bootTime":1733349630,"procs":590,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1204 15:26:46.532813   19024 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:26:46.556657   19024 out.go:177] * [functional-084000] minikube v1.34.0 sur Darwin 15.0.1
	I1204 15:26:46.598439   19024 notify.go:220] Checking for updates...
	I1204 15:26:46.636410   19024 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:26:46.680099   19024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	I1204 15:26:46.701314   19024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1204 15:26:46.722227   19024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:26:46.743352   19024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	I1204 15:26:46.764258   19024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:26:46.785568   19024 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:26:46.785934   19024 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:26:46.785974   19024 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:26:46.797579   19024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57455
	I1204 15:26:46.797937   19024 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:26:46.798345   19024 main.go:141] libmachine: Using API Version  1
	I1204 15:26:46.798359   19024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:26:46.798585   19024 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:26:46.798690   19024 main.go:141] libmachine: (functional-084000) Calling .DriverName
	I1204 15:26:46.798900   19024 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:26:46.799162   19024 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:26:46.799199   19024 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:26:46.810637   19024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57457
	I1204 15:26:46.810973   19024 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:26:46.811342   19024 main.go:141] libmachine: Using API Version  1
	I1204 15:26:46.811364   19024 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:26:46.811596   19024 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:26:46.811705   19024 main.go:141] libmachine: (functional-084000) Calling .DriverName
	I1204 15:26:46.843085   19024 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1204 15:26:46.901345   19024 start.go:297] selected driver: hyperkit
	I1204 15:26:46.901376   19024 start.go:901] validating driver "hyperkit" against &{Name:functional-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:26:46.901602   19024 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:26:46.929161   19024 out.go:201] 
	W1204 15:26:46.966306   19024 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1204 15:26:47.003479   19024 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.63s)
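
The French output above is the point of this test: InternationalLanguage starts minikube under a French locale with a deliberately undersized memory request and asserts the failure message is localized. In English it reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB" (the earlier lines likewise render "minikube v1.34.0 on Darwin 15.0.1" and "Using the hyperkit driver based on the existing profile" in French). A rough reproduction in Go follows; the LC_ALL=fr variable and the --dry-run flag are assumptions about how the localized run is driven, not read from this log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Request far less memory than the 1800MB minimum so start refuses fast;
		// LC_ALL=fr should localize the failure message (assumption).
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-084000",
			"--dry-run", "--memory", "250mb", "--alsologtostderr")
		cmd.Env = append(os.Environ(), "LC_ALL=fr")
		out, err := cmd.CombinedOutput()
		fmt.Printf("exit: %v\n%s", err, out) // expect a non-zero exit and French text
	}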

TestFunctional/parallel/StatusCmd (0.56s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.56s)
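
The second status invocation above exercises -f, which renders a Go template against minikube's status struct; "kublet" is just the label text the test writes in front of the {{.Kubelet}} field, not a field name. A minimal sketch of driving the same command and splitting the result, reusing this run's profile:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-084000",
			"status", "-f", format).Output()
		if err != nil {
			fmt.Println("note: status exits non-zero when a component is stopped:", err)
		}
		for _, field := range strings.Split(strings.TrimSpace(string(out)), ",") {
			fmt.Println(field) // e.g. host:Running
		}
	}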

TestFunctional/parallel/ServiceCmdConnect (12.43s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-084000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-084000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5dh7h" [2798956b-c7db-4a3a-a62e-8c072135edd2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5dh7h" [2798956b-c7db-4a3a-a62e-8c072135edd2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.017909917s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:31659
functional_test.go:1675: http://192.169.0.4:31659: success! body:

Hostname: hello-node-connect-67bdd5bbb4-5dh7h

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31659
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.43s)
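
The sequence above is: create a deployment from the echoserver image, expose it as a NodePort service, resolve the URL with `service --url`, then fetch it; the body printed is the echoserver's standard reply. A short sketch of that final verification step, hard-coding the endpoint reported for this run:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// NodePort URL as reported by `minikube service hello-node-connect --url`
		resp, err := client.Get("http://192.169.0.4:31659")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d\n%s", resp.StatusCode, body)
	}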

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (27.42s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [500dc3a7-da63-4a83-8c7c-7a12e2543ec6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003267713s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-084000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-084000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-084000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-084000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6dd55bbd-d6b7-4d9d-9ee3-844a9afc52f9] Pending
helpers_test.go:344: "sp-pod" [6dd55bbd-d6b7-4d9d-9ee3-844a9afc52f9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6dd55bbd-d6b7-4d9d-9ee3-844a9afc52f9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005138156s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-084000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-084000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-084000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3a76a52c-16d8-402b-9c7c-597af29ee8e2] Pending
helpers_test.go:344: "sp-pod" [3a76a52c-16d8-402b-9c7c-597af29ee8e2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3a76a52c-16d8-402b-9c7c-597af29ee8e2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002515013s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-084000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.42s)
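
Each "waiting ... for pods matching" step above is a label-selector poll that tolerates the Pending -> ContainersNotReady -> Running progression seen in the helper output. A rough equivalent using kubectl and JSONPath, reusing this run's context and pod name (a sketch, not the harness's own helper):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "functional-084000",
				"get", "pod", "sp-pod", "-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Running" {
				fmt.Println("sp-pod is Running")
				return
			}
			time.Sleep(2 * time.Second) // phase moves Pending -> Running once the image is pulled
		}
		fmt.Println("timed out waiting for sp-pod")
	}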

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

TestFunctional/parallel/CpCmd (1.1s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh -n functional-084000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 cp functional-084000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd624769291/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh -n functional-084000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh -n functional-084000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.10s)

TestFunctional/parallel/MySQL (24.9s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-084000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-282fr" [10c2f313-c0b5-4fed-82fc-84df6a764d60] Pending
helpers_test.go:344: "mysql-6cdb49bbb-282fr" [10c2f313-c0b5-4fed-82fc-84df6a764d60] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-282fr" [10c2f313-c0b5-4fed-82fc-84df6a764d60] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004249559s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-084000 exec mysql-6cdb49bbb-282fr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-084000 exec mysql-6cdb49bbb-282fr -- mysql -ppassword -e "show databases;": exit status 1 (170.0667ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1204 15:26:14.947723   17821 retry.go:31] will retry after 868.627853ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-084000 exec mysql-6cdb49bbb-282fr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-084000 exec mysql-6cdb49bbb-282fr -- mysql -ppassword -e "show databases;": exit status 1 (113.672266ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1204 15:26:15.931089   17821 retry.go:31] will retry after 1.09406225s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-084000 exec mysql-6cdb49bbb-282fr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-084000 exec mysql-6cdb49bbb-282fr -- mysql -ppassword -e "show databases;": exit status 1 (110.286172ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1204 15:26:17.137049   17821 retry.go:31] will retry after 1.306614569s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-084000 exec mysql-6cdb49bbb-282fr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.90s)
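
The retries above are expected rather than flaky: mysqld needs time to initialize, so the first exec attempts fail with ERROR 1045 and then ERROR 2002 while the server comes up, and retry.go backs off with growing delays (0.87s, 1.09s, 1.31s) until the query succeeds. A minimal loop in the same spirit (a sketch, not minikube's retry.go):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 800 * time.Millisecond
		for attempt := 1; attempt <= 6; attempt++ {
			// deploy/mysql lets kubectl pick a pod from the deployment
			out, err := exec.Command("kubectl", "--context", "functional-084000", "exec",
				"deploy/mysql", "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the wait, roughly matching the intervals above
		}
	}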

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/17821/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo cat /etc/test/nested/copy/17821/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.04s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/17821.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo cat /etc/ssl/certs/17821.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/17821.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo cat /usr/share/ca-certificates/17821.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/178212.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo cat /etc/ssl/certs/178212.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/178212.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo cat /usr/share/ca-certificates/178212.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.04s)
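
The .0 files checked above appear to be OpenSSL subject-hash names: alongside 17821.pem and 178212.pem, each cert is also reachable as <subject_hash>.0 under /etc/ssl/certs so OpenSSL can find it by hash, which would make 51391683.0 and 3ec20f2e.0 the hashes of the two test certs. That reading can be confirmed by recomputing the hash; a small sketch (the interpretation and wrapper are mine, the paths are from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `openssl x509 -hash` prints the subject hash that the ".0" link name is built from;
		// run this inside the VM (e.g. via `minikube ssh`) where the cert actually lives.
		out, err := exec.Command("openssl", "x509", "-noout", "-hash",
			"-in", "/etc/ssl/certs/17821.pem").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("expected link name: %s.0\n", strings.TrimSpace(string(out)))
	}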

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-084000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
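
The go-template in that command iterates the first node's label map and emits each key. The range clause can be exercised on its own with text/template; a self-contained sketch over a stand-in label map:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Stand-in for (index .items 0).metadata.labels in the kubectl output
		labels := map[string]string{
			"kubernetes.io/hostname": "functional-084000",
			"kubernetes.io/os":       "linux",
		}
		tmpl := template.Must(template.New("labels").Parse(
			"{{range $k, $v := .}}{{$k}} {{end}}"))
		_ = tmpl.Execute(os.Stdout, labels) // prints each label key followed by a space
	}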

TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "sudo systemctl is-active crio": exit status 1 (156.126368ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)
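
The non-zero exit here is the expected signal, not a failure: `systemctl is-active` exits 3 for an inactive unit, ssh relays that ("Process exited with status 3"), and the test passes because stdout reads "inactive", confirming crio is disabled while docker is the active runtime. A sketch of telling the two outcomes apart:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-084000",
			"ssh", "sudo systemctl is-active crio").Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// a non-zero exit with "inactive" on stdout is the pass condition here
			fmt.Printf("exit=%d stdout=%s", ee.ExitCode(), out)
			return
		}
		fmt.Printf("crio is active: %s", out)
	}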

TestFunctional/parallel/License (0.5s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.50s)

TestFunctional/parallel/Version/short (0.17s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

TestFunctional/parallel/Version/components (0.37s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-084000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-084000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-084000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-084000 image ls --format short --alsologtostderr:
I1204 15:27:00.656553   19160 out.go:345] Setting OutFile to fd 1 ...
I1204 15:27:00.656923   19160 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:00.656928   19160 out.go:358] Setting ErrFile to fd 2...
I1204 15:27:00.656932   19160 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:00.657122   19160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
I1204 15:27:00.657804   19160 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:00.657897   19160 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:00.658261   19160 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:00.658345   19160 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:00.669378   19160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57597
I1204 15:27:00.669787   19160 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:00.670197   19160 main.go:141] libmachine: Using API Version  1
I1204 15:27:00.670224   19160 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:00.670472   19160 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:00.670593   19160 main.go:141] libmachine: (functional-084000) Calling .GetState
I1204 15:27:00.670686   19160 main.go:141] libmachine: (functional-084000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1204 15:27:00.670755   19160 main.go:141] libmachine: (functional-084000) DBG | hyperkit pid from json: 18428
I1204 15:27:00.672200   19160 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:00.672224   19160 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:00.683522   19160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57599
I1204 15:27:00.683886   19160 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:00.684246   19160 main.go:141] libmachine: Using API Version  1
I1204 15:27:00.684257   19160 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:00.684564   19160 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:00.684700   19160 main.go:141] libmachine: (functional-084000) Calling .DriverName
I1204 15:27:00.685555   19160 ssh_runner.go:195] Run: systemctl --version
I1204 15:27:00.685629   19160 main.go:141] libmachine: (functional-084000) Calling .GetSSHHostname
I1204 15:27:00.686015   19160 main.go:141] libmachine: (functional-084000) Calling .GetSSHPort
I1204 15:27:00.686297   19160 main.go:141] libmachine: (functional-084000) Calling .GetSSHKeyPath
I1204 15:27:00.686449   19160 main.go:141] libmachine: (functional-084000) Calling .GetSSHUsername
I1204 15:27:00.686557   19160 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/functional-084000/id_rsa Username:docker}
I1204 15:27:00.721096   19160 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1204 15:27:00.737339   19160 main.go:141] libmachine: Making call to close driver server
I1204 15:27:00.737349   19160 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:00.737504   19160 main.go:141] libmachine: (functional-084000) DBG | Closing plugin on server side
I1204 15:27:00.737505   19160 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:00.737517   19160 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 15:27:00.737524   19160 main.go:141] libmachine: Making call to close driver server
I1204 15:27:00.737528   19160 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:00.737687   19160 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:00.737696   19160 main.go:141] libmachine: (functional-084000) DBG | Closing plugin on server side
I1204 15:27:00.737697   19160 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-084000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-084000 | dca3d39db99a8 | 30B    |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| localhost/my-image                          | functional-084000 | f212720eb4f7a | 1.24MB |
| docker.io/library/nginx                     | alpine            | 91ca84b4f5779 | 52.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.2           | 9499c9960544e | 94.2MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kicbase/echo-server               | functional-084000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.31.2           | 847c7bc1a5418 | 67.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2           | 0486b6c53a1b5 | 88.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/nginx                     | latest            | 66f8bdd3810c9 | 192MB  |
| registry.k8s.io/kube-proxy                  | v1.31.2           | 505d571f5fd56 | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-084000 image ls --format table --alsologtostderr:
I1204 15:27:04.780605   19258 out.go:345] Setting OutFile to fd 1 ...
I1204 15:27:04.781369   19258 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:04.781378   19258 out.go:358] Setting ErrFile to fd 2...
I1204 15:27:04.781384   19258 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:04.781790   19258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
I1204 15:27:04.782618   19258 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:04.782715   19258 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:04.783066   19258 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:04.783123   19258 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:04.794177   19258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57689
I1204 15:27:04.794611   19258 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:04.795036   19258 main.go:141] libmachine: Using API Version  1
I1204 15:27:04.795044   19258 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:04.795314   19258 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:04.795475   19258 main.go:141] libmachine: (functional-084000) Calling .GetState
I1204 15:27:04.795586   19258 main.go:141] libmachine: (functional-084000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1204 15:27:04.795658   19258 main.go:141] libmachine: (functional-084000) DBG | hyperkit pid from json: 18428
I1204 15:27:04.797381   19258 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:04.797417   19258 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:04.808982   19258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57691
I1204 15:27:04.809334   19258 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:04.809699   19258 main.go:141] libmachine: Using API Version  1
I1204 15:27:04.809718   19258 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:04.809941   19258 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:04.810049   19258 main.go:141] libmachine: (functional-084000) Calling .DriverName
I1204 15:27:04.810224   19258 ssh_runner.go:195] Run: systemctl --version
I1204 15:27:04.810242   19258 main.go:141] libmachine: (functional-084000) Calling .GetSSHHostname
I1204 15:27:04.810328   19258 main.go:141] libmachine: (functional-084000) Calling .GetSSHPort
I1204 15:27:04.810406   19258 main.go:141] libmachine: (functional-084000) Calling .GetSSHKeyPath
I1204 15:27:04.810492   19258 main.go:141] libmachine: (functional-084000) Calling .GetSSHUsername
I1204 15:27:04.810584   19258 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/functional-084000/id_rsa Username:docker}
I1204 15:27:04.844067   19258 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1204 15:27:04.862149   19258 main.go:141] libmachine: Making call to close driver server
I1204 15:27:04.862158   19258 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:04.862283   19258 main.go:141] libmachine: (functional-084000) DBG | Closing plugin on server side
I1204 15:27:04.862311   19258 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:04.862322   19258 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 15:27:04.862329   19258 main.go:141] libmachine: Making call to close driver server
I1204 15:27:04.862336   19258 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:04.862453   19258 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:04.862463   19258 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 15:27:04.862472   19258 main.go:141] libmachine: (functional-084000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-084000 image ls --format json --alsologtostderr:
[{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"88400000"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67400000"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"91500000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52500000"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"94200000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-084000"],"size":"4940000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"f212720eb4f7a2adbf667adc681d9910f4f1305a81c2c5192b04c714d07a0c5d","repoDigests":[],"repoTags":["localhost/my-image:functional-084000"],"size":"1240000"},{"id":"dca3d39db99a81b557625e8cf1dbf14c2247a99581ac31e1a058f498d98aadc6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-084000"],"size":"30"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-084000 image ls --format json --alsologtostderr:
I1204 15:27:04.607637   19249 out.go:345] Setting OutFile to fd 1 ...
I1204 15:27:04.607871   19249 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:04.607877   19249 out.go:358] Setting ErrFile to fd 2...
I1204 15:27:04.607881   19249 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:04.608059   19249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
I1204 15:27:04.608717   19249 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:04.608814   19249 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:04.609206   19249 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:04.609239   19249 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:04.620376   19249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57677
I1204 15:27:04.620770   19249 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:04.621198   19249 main.go:141] libmachine: Using API Version  1
I1204 15:27:04.621212   19249 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:04.621454   19249 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:04.621581   19249 main.go:141] libmachine: (functional-084000) Calling .GetState
I1204 15:27:04.621697   19249 main.go:141] libmachine: (functional-084000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1204 15:27:04.621766   19249 main.go:141] libmachine: (functional-084000) DBG | hyperkit pid from json: 18428
I1204 15:27:04.623310   19249 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:04.623340   19249 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:04.634473   19249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57679
I1204 15:27:04.634823   19249 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:04.635156   19249 main.go:141] libmachine: Using API Version  1
I1204 15:27:04.635165   19249 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:04.635427   19249 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:04.635548   19249 main.go:141] libmachine: (functional-084000) Calling .DriverName
I1204 15:27:04.635714   19249 ssh_runner.go:195] Run: systemctl --version
I1204 15:27:04.635735   19249 main.go:141] libmachine: (functional-084000) Calling .GetSSHHostname
I1204 15:27:04.635849   19249 main.go:141] libmachine: (functional-084000) Calling .GetSSHPort
I1204 15:27:04.635955   19249 main.go:141] libmachine: (functional-084000) Calling .GetSSHKeyPath
I1204 15:27:04.636092   19249 main.go:141] libmachine: (functional-084000) Calling .GetSSHUsername
I1204 15:27:04.636280   19249 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/functional-084000/id_rsa Username:docker}
I1204 15:27:04.670554   19249 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1204 15:27:04.687539   19249 main.go:141] libmachine: Making call to close driver server
I1204 15:27:04.687547   19249 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:04.687706   19249 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:04.687718   19249 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 15:27:04.687724   19249 main.go:141] libmachine: Making call to close driver server
I1204 15:27:04.687728   19249 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:04.687904   19249 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:04.687915   19249 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 15:27:04.687938   19249 main.go:141] libmachine: (functional-084000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)
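
Of the four list formats, json is the easiest to post-process: the dump above is a single array of objects with id, repoDigests, repoTags, and size fields. A decoding sketch against the same command:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type image struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"` // bytes, serialized as a string
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-084000",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}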

TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-084000 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: dca3d39db99a81b557625e8cf1dbf14c2247a99581ac31e1a058f498d98aadc6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-084000
size: "30"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "94200000"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "91500000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52500000"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "88400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-084000
size: "4940000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-084000 image ls --format yaml --alsologtostderr:
I1204 15:27:00.835189   19164 out.go:345] Setting OutFile to fd 1 ...
I1204 15:27:00.835462   19164 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:00.835468   19164 out.go:358] Setting ErrFile to fd 2...
I1204 15:27:00.835472   19164 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:00.835673   19164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
I1204 15:27:00.836375   19164 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:00.836467   19164 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:00.836835   19164 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:00.836878   19164 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:00.848570   19164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57602
I1204 15:27:00.849022   19164 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:00.849468   19164 main.go:141] libmachine: Using API Version  1
I1204 15:27:00.849480   19164 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:00.849719   19164 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:00.849838   19164 main.go:141] libmachine: (functional-084000) Calling .GetState
I1204 15:27:00.849947   19164 main.go:141] libmachine: (functional-084000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1204 15:27:00.850007   19164 main.go:141] libmachine: (functional-084000) DBG | hyperkit pid from json: 18428
I1204 15:27:00.851629   19164 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:00.851664   19164 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:00.862973   19164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57604
I1204 15:27:00.863319   19164 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:00.863694   19164 main.go:141] libmachine: Using API Version  1
I1204 15:27:00.863711   19164 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:00.863949   19164 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:00.864063   19164 main.go:141] libmachine: (functional-084000) Calling .DriverName
I1204 15:27:00.864244   19164 ssh_runner.go:195] Run: systemctl --version
I1204 15:27:00.864263   19164 main.go:141] libmachine: (functional-084000) Calling .GetSSHHostname
I1204 15:27:00.864347   19164 main.go:141] libmachine: (functional-084000) Calling .GetSSHPort
I1204 15:27:00.864445   19164 main.go:141] libmachine: (functional-084000) Calling .GetSSHKeyPath
I1204 15:27:00.864545   19164 main.go:141] libmachine: (functional-084000) Calling .GetSSHUsername
I1204 15:27:00.864626   19164 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/functional-084000/id_rsa Username:docker}
I1204 15:27:00.898699   19164 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1204 15:27:00.921471   19164 main.go:141] libmachine: Making call to close driver server
I1204 15:27:00.921479   19164 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:00.921628   19164 main.go:141] libmachine: (functional-084000) DBG | Closing plugin on server side
I1204 15:27:00.921634   19164 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:00.921640   19164 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 15:27:00.921647   19164 main.go:141] libmachine: Making call to close driver server
I1204 15:27:00.921651   19164 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:00.921817   19164 main.go:141] libmachine: (functional-084000) DBG | Closing plugin on server side
I1204 15:27:00.921830   19164 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:00.921875   19164 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh pgrep buildkitd: exit status 1 (172.75935ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image build -t localhost/my-image:functional-084000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-084000 image build -t localhost/my-image:functional-084000 testdata/build --alsologtostderr: (3.246538448s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-084000 image build -t localhost/my-image:functional-084000 testdata/build --alsologtostderr:
I1204 15:27:01.219206   19182 out.go:345] Setting OutFile to fd 1 ...
I1204 15:27:01.219768   19182 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:01.219774   19182 out.go:358] Setting ErrFile to fd 2...
I1204 15:27:01.219778   19182 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:01.219954   19182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
I1204 15:27:01.220598   19182 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:01.221285   19182 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:01.221629   19182 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:01.221665   19182 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:01.234740   19182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57614
I1204 15:27:01.235288   19182 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:01.235781   19182 main.go:141] libmachine: Using API Version  1
I1204 15:27:01.235795   19182 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:01.236121   19182 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:01.236262   19182 main.go:141] libmachine: (functional-084000) Calling .GetState
I1204 15:27:01.236420   19182 main.go:141] libmachine: (functional-084000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1204 15:27:01.236445   19182 main.go:141] libmachine: (functional-084000) DBG | hyperkit pid from json: 18428
I1204 15:27:01.238007   19182 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1204 15:27:01.238032   19182 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1204 15:27:01.253656   19182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57616
I1204 15:27:01.254424   19182 main.go:141] libmachine: () Calling .GetVersion
I1204 15:27:01.255045   19182 main.go:141] libmachine: Using API Version  1
I1204 15:27:01.255096   19182 main.go:141] libmachine: () Calling .SetConfigRaw
I1204 15:27:01.255889   19182 main.go:141] libmachine: () Calling .GetMachineName
I1204 15:27:01.256187   19182 main.go:141] libmachine: (functional-084000) Calling .DriverName
I1204 15:27:01.256474   19182 ssh_runner.go:195] Run: systemctl --version
I1204 15:27:01.256503   19182 main.go:141] libmachine: (functional-084000) Calling .GetSSHHostname
I1204 15:27:01.256791   19182 main.go:141] libmachine: (functional-084000) Calling .GetSSHPort
I1204 15:27:01.257028   19182 main.go:141] libmachine: (functional-084000) Calling .GetSSHKeyPath
I1204 15:27:01.257240   19182 main.go:141] libmachine: (functional-084000) Calling .GetSSHUsername
I1204 15:27:01.257433   19182 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/functional-084000/id_rsa Username:docker}
I1204 15:27:01.296399   19182 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.622265006.tar
I1204 15:27:01.296538   19182 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1204 15:27:01.310694   19182 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.622265006.tar
I1204 15:27:01.314128   19182 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.622265006.tar: stat -c "%s %y" /var/lib/minikube/build/build.622265006.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.622265006.tar': No such file or directory
I1204 15:27:01.314158   19182 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.622265006.tar --> /var/lib/minikube/build/build.622265006.tar (3072 bytes)
I1204 15:27:01.337381   19182 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.622265006
I1204 15:27:01.346779   19182 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.622265006 -xf /var/lib/minikube/build/build.622265006.tar
I1204 15:27:01.357014   19182 docker.go:360] Building image: /var/lib/minikube/build/build.622265006
I1204 15:27:01.357099   19182 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-084000 /var/lib/minikube/build/build.622265006
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.7s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:f212720eb4f7a2adbf667adc681d9910f4f1305a81c2c5192b04c714d07a0c5d done
#8 naming to localhost/my-image:functional-084000 done
#8 DONE 0.0s
I1204 15:27:04.322291   19182 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-084000 /var/lib/minikube/build/build.622265006: (2.965101422s)
I1204 15:27:04.322374   19182 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.622265006
I1204 15:27:04.330644   19182 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.622265006.tar
I1204 15:27:04.339722   19182 build_images.go:217] Built localhost/my-image:functional-084000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.622265006.tar
I1204 15:27:04.339745   19182 build_images.go:133] succeeded building to: functional-084000
I1204 15:27:04.339756   19182 build_images.go:134] failed building to: 
I1204 15:27:04.339775   19182 main.go:141] libmachine: Making call to close driver server
I1204 15:27:04.339781   19182 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:04.339952   19182 main.go:141] libmachine: (functional-084000) DBG | Closing plugin on server side
I1204 15:27:04.339959   19182 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:04.339966   19182 main.go:141] libmachine: Making call to close connection to plugin binary
I1204 15:27:04.339974   19182 main.go:141] libmachine: Making call to close driver server
I1204 15:27:04.339979   19182 main.go:141] libmachine: (functional-084000) Calling .Close
I1204 15:27:04.340112   19182 main.go:141] libmachine: (functional-084000) DBG | Closing plugin on server side
I1204 15:27:04.340132   19182 main.go:141] libmachine: Successfully made call to close driver server
I1204 15:27:04.340140   19182 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)
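
Note: the ImageBuild flow above is reproducible by hand. A minimal sketch using the same commands the test ran (assuming the functional-084000 profile is up and testdata/build holds the Dockerfile):

  # Build inside the VM's docker runtime; per the log, minikube packages the
  # context into a tar, copies it to /var/lib/minikube/build, and runs docker build.
  out/minikube-darwin-amd64 -p functional-084000 image build -t localhost/my-image:functional-084000 testdata/build --alsologtostderr
  # Verify the freshly built image is visible to the cluster runtime.
  out/minikube-darwin-amd64 -p functional-084000 image ls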

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.728225407s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-084000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/DockerEnv/bash (0.62s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-084000 docker-env) && out/minikube-darwin-amd64 status -p functional-084000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-084000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.62s)
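
Note: the DockerEnv check amounts to pointing the host's docker client at the VM's daemon. A minimal bash sketch of the same round trip:

  # Export DOCKER_HOST and TLS settings for the profile's daemon, then
  # confirm both minikube status and image listing work in that shell.
  eval $(out/minikube-darwin-amd64 -p functional-084000 docker-env)
  out/minikube-darwin-amd64 status -p functional-084000
  docker images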

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image load --daemon kicbase/echo-server:functional-084000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.79s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image load --daemon kicbase/echo-server:functional-084000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.64s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-084000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image load --daemon kicbase/echo-server:functional-084000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image save kicbase/echo-server:functional-084000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image rm kicbase/echo-server:functional-084000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-084000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 image save --daemon kicbase/echo-server:functional-084000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-084000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
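
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together exercise one save/load round trip. A condensed sketch with the commands logged above:

  # Save the cluster-side image to a host tarball, remove it from the
  # cluster, then restore it from the file and list images to confirm.
  out/minikube-darwin-amd64 -p functional-084000 image save kicbase/echo-server:functional-084000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
  out/minikube-darwin-amd64 -p functional-084000 image rm kicbase/echo-server:functional-084000 --alsologtostderr
  out/minikube-darwin-amd64 -p functional-084000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
  out/minikube-darwin-amd64 -p functional-084000 image ls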

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-084000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-084000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-084000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 18834: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-084000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-084000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-084000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0ef17b70-c5e1-47fd-8b7b-d92a337dd69b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0ef17b70-c5e1-47fd-8b7b-d92a337dd69b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.004480388s
I1204 15:26:18.143887   17821 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.24s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-084000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.83.146 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1204 15:26:18.244793   17821 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1204 15:26:18.324214   17821 config.go:182] Loaded profile config "functional-084000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-084000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)
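
Note: the TunnelCmd/serial group above follows a single lifecycle: start the tunnel, deploy a LoadBalancer service, read its ingress IP, check DNS, then tear down. A condensed sketch from the logged commands:

  # Run the tunnel in the background so LoadBalancer services get an IP.
  out/minikube-darwin-amd64 -p functional-084000 tunnel --alsologtostderr &
  kubectl --context functional-084000 apply -f testdata/testsvc.yaml
  kubectl --context functional-084000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
  # DNS checks, as in DNSResolutionByDig and DNSResolutionByDscacheutil.
  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.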

TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-084000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-084000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-httfg" [e519cb20-5c8e-421f-b661-22ee3fd9bccc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-httfg" [e519cb20-5c8e-421f-b661-22ee3fd9bccc] Running
E1204 15:26:36.484988   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005375933s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

TestFunctional/parallel/ServiceCmd/List (0.8s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.80s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.8s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 service list -o json
functional_test.go:1494: Took "795.368693ms" to run "out/minikube-darwin-amd64 -p functional-084000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.80s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:32655
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:32655
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
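
Note: the ServiceCmd group reduces to deploy, expose, and resolve. A minimal sketch with the same commands:

  kubectl --context functional-084000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-084000 expose deployment hello-node --type=NodePort --port=8080
  # Enumerate services, then resolve a reachable URL for the NodePort.
  out/minikube-darwin-amd64 -p functional-084000 service list
  out/minikube-darwin-amd64 -p functional-084000 service hello-node --url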

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "232.195475ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "93.457786ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "231.727738ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "93.721501ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctional/parallel/MountCmd/any-port (6.58s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2250991209/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733354802345724000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2250991209/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733354802345724000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2250991209/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733354802345724000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2250991209/001/test-1733354802345724000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (172.439402ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1204 15:26:42.519071   17821 retry.go:31] will retry after 522.110838ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  4 23:26 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  4 23:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  4 23:26 test-1733354802345724000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh cat /mount-9p/test-1733354802345724000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-084000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [df66def3-b403-44e5-9a36-54f4c19771bc] Pending
helpers_test.go:344: "busybox-mount" [df66def3-b403-44e5-9a36-54f4c19771bc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [df66def3-b403-44e5-9a36-54f4c19771bc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [df66def3-b403-44e5-9a36-54f4c19771bc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003822875s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-084000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2250991209/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.58s)
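
Note: the any-port mount check is a mount/verify/unmount cycle over 9p. A minimal sketch; $HOST_DIR is a hypothetical stand-in for the temporary host directory the test generates:

  # Mount a host directory into the guest at /mount-9p (runs until killed).
  out/minikube-darwin-amd64 mount -p functional-084000 "$HOST_DIR":/mount-9p --alsologtostderr -v=1 &
  # Verify the 9p mount exists and inspect its contents, then unmount.
  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-darwin-amd64 -p functional-084000 ssh -- ls -la /mount-9p
  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo umount -f /mount-9p"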

TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup442221110/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup442221110/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup442221110/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T" /mount1: exit status 1 (191.46118ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1204 15:27:02.836378   17821 retry.go:31] will retry after 342.352583ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T" /mount1: exit status 1 (218.601627ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1204 15:27:03.398334   17821 retry.go:31] will retry after 1.031161622s: exit status 1
E1204 15:27:04.206438   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-084000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup442221110/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup442221110/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup442221110/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-084000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-084000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-084000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (197.12s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-098000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-098000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m16.686645666s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (197.12s)
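
Note: StartCluster boils down to one start invocation plus a status check. A minimal sketch of the logged commands (--ha requests multiple control-plane nodes):

  out/minikube-darwin-amd64 start -p ha-098000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit
  out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr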

TestMultiControlPlane/serial/DeployApp (5.76s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-098000 -- rollout status deployment/busybox: (3.045293308s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-fvhj6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-tkk5l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-xtv76 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-fvhj6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-tkk5l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-xtv76 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-fvhj6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-tkk5l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-xtv76 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.76s)

TestMultiControlPlane/serial/PingHostFromPods (1.42s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-fvhj6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-fvhj6 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-tkk5l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-tkk5l -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-xtv76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-xtv76 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.42s)
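
Note: the host-connectivity check resolves host.minikube.internal from inside a pod and pings the returned gateway address. A sketch of one iteration; the busybox pod name comes from the deployment above and will differ per run:

  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-fvhj6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-darwin-amd64 kubectl -p ha-098000 -- exec busybox-7dff88458-fvhj6 -- sh -c "ping -c 1 192.169.0.1"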

TestMultiControlPlane/serial/AddWorkerNode (49.57s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-098000 -v=7 --alsologtostderr
E1204 15:30:53.781890   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:53.788170   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:53.799426   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:53.820750   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:53.863804   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:53.944945   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:54.107861   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:54.430788   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:55.072197   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:56.354112   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:30:58.916148   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:31:04.038205   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:31:14.283715   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-098000 -v=7 --alsologtostderr: (49.052966592s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.57s)

TestMultiControlPlane/serial/NodeLabels (0.24s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-098000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.24s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

TestMultiControlPlane/serial/CopyFile (10.57s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp testdata/cp-test.txt ha-098000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3261314918/001/cp-test_ha-098000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000:/home/docker/cp-test.txt ha-098000-m02:/home/docker/cp-test_ha-098000_ha-098000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m02 "sudo cat /home/docker/cp-test_ha-098000_ha-098000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000:/home/docker/cp-test.txt ha-098000-m03:/home/docker/cp-test_ha-098000_ha-098000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m03 "sudo cat /home/docker/cp-test_ha-098000_ha-098000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000:/home/docker/cp-test.txt ha-098000-m04:/home/docker/cp-test_ha-098000_ha-098000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo cat /home/docker/cp-test_ha-098000_ha-098000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp testdata/cp-test.txt ha-098000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3261314918/001/cp-test_ha-098000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m02:/home/docker/cp-test.txt ha-098000:/home/docker/cp-test_ha-098000-m02_ha-098000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000 "sudo cat /home/docker/cp-test_ha-098000-m02_ha-098000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m02:/home/docker/cp-test.txt ha-098000-m03:/home/docker/cp-test_ha-098000-m02_ha-098000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m03 "sudo cat /home/docker/cp-test_ha-098000-m02_ha-098000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m02:/home/docker/cp-test.txt ha-098000-m04:/home/docker/cp-test_ha-098000-m02_ha-098000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo cat /home/docker/cp-test_ha-098000-m02_ha-098000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp testdata/cp-test.txt ha-098000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3261314918/001/cp-test_ha-098000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m03:/home/docker/cp-test.txt ha-098000:/home/docker/cp-test_ha-098000-m03_ha-098000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000 "sudo cat /home/docker/cp-test_ha-098000-m03_ha-098000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m03:/home/docker/cp-test.txt ha-098000-m02:/home/docker/cp-test_ha-098000-m03_ha-098000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m02 "sudo cat /home/docker/cp-test_ha-098000-m03_ha-098000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m03:/home/docker/cp-test.txt ha-098000-m04:/home/docker/cp-test_ha-098000-m03_ha-098000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo cat /home/docker/cp-test_ha-098000-m03_ha-098000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp testdata/cp-test.txt ha-098000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3261314918/001/cp-test_ha-098000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt ha-098000:/home/docker/cp-test_ha-098000-m04_ha-098000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000 "sudo cat /home/docker/cp-test_ha-098000-m04_ha-098000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt ha-098000-m02:/home/docker/cp-test_ha-098000-m04_ha-098000-m02.txt
E1204 15:31:34.771327   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m02 "sudo cat /home/docker/cp-test_ha-098000-m04_ha-098000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 cp ha-098000-m04:/home/docker/cp-test.txt ha-098000-m03:/home/docker/cp-test_ha-098000-m04_ha-098000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 ssh -n ha-098000-m03 "sudo cat /home/docker/cp-test_ha-098000-m04_ha-098000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.57s)
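The CopyFile steps above walk every source/destination pair: seed testdata/cp-test.txt onto each node with "minikube cp", copy it host-to-node, node-to-host, and node-to-node, and verify each copy by reading it back over "minikube ssh ... sudo cat". A minimal Go sketch of one such hop, assuming the binary path, profile, and node names shown in this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		bin, profile := "out/minikube-darwin-amd64", "ha-098000"
		// Copy cp-test.txt from node m02 to node m03, as in the log above.
		cp := exec.Command(bin, "-p", profile, "cp",
			"ha-098000-m02:/home/docker/cp-test.txt",
			"ha-098000-m03:/home/docker/cp-test_ha-098000-m02_ha-098000-m03.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("cp failed: %v: %s", err, out))
		}
		// Read the copy back on the destination node to confirm it landed.
		cat := exec.Command(bin, "-p", profile, "ssh", "-n", "ha-098000-m03",
			"sudo cat /home/docker/cp-test_ha-098000-m02_ha-098000-m03.txt")
		out, err := cat.CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Printf("round-tripped contents: %s", out)
	}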

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (8.77s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 node stop m02 -v=7 --alsologtostderr
E1204 15:31:36.499113   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p ha-098000 node stop m02 -v=7 --alsologtostderr: (8.3617597s)
ha_test.go:371: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr: exit status 7 (406.346085ms)

                                                
                                                
-- stdout --
	ha-098000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-098000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-098000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-098000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:44.200400   20106 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:44.200744   20106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:44.200750   20106 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:44.200754   20106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:44.200941   20106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:31:44.201128   20106 out.go:352] Setting JSON to false
	I1204 15:31:44.201149   20106 mustload.go:65] Loading cluster: ha-098000
	I1204 15:31:44.201184   20106 notify.go:220] Checking for updates...
	I1204 15:31:44.201509   20106 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:44.201531   20106 status.go:174] checking status of ha-098000 ...
	I1204 15:31:44.201988   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.202032   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.213296   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58439
	I1204 15:31:44.213583   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.213988   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.213999   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.214205   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.214303   20106 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:31:44.214404   20106 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:31:44.214493   20106 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 19294
	I1204 15:31:44.215741   20106 status.go:371] ha-098000 host status = "Running" (err=<nil>)
	I1204 15:31:44.215758   20106 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:31:44.216021   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.216041   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.227038   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58441
	I1204 15:31:44.227388   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.231806   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.231833   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.232107   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.232226   20106 main.go:141] libmachine: (ha-098000) Calling .GetIP
	I1204 15:31:44.232327   20106 host.go:66] Checking if "ha-098000" exists ...
	I1204 15:31:44.232593   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.232614   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.243503   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58443
	I1204 15:31:44.243823   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.244161   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.244175   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.244397   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.244508   20106 main.go:141] libmachine: (ha-098000) Calling .DriverName
	I1204 15:31:44.244666   20106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 15:31:44.244686   20106 main.go:141] libmachine: (ha-098000) Calling .GetSSHHostname
	I1204 15:31:44.244774   20106 main.go:141] libmachine: (ha-098000) Calling .GetSSHPort
	I1204 15:31:44.244856   20106 main.go:141] libmachine: (ha-098000) Calling .GetSSHKeyPath
	I1204 15:31:44.244939   20106 main.go:141] libmachine: (ha-098000) Calling .GetSSHUsername
	I1204 15:31:44.245027   20106 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000/id_rsa Username:docker}
	I1204 15:31:44.286189   20106 ssh_runner.go:195] Run: systemctl --version
	I1204 15:31:44.290580   20106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:31:44.302121   20106 kubeconfig.go:125] found "ha-098000" server: "https://192.169.0.254:8443"
	I1204 15:31:44.302145   20106 api_server.go:166] Checking apiserver status ...
	I1204 15:31:44.302194   20106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:31:44.313431   20106 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1968/cgroup
	W1204 15:31:44.320928   20106 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1968/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:31:44.321009   20106 ssh_runner.go:195] Run: ls
	I1204 15:31:44.324226   20106 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1204 15:31:44.328452   20106 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1204 15:31:44.328466   20106 status.go:463] ha-098000 apiserver status = Running (err=<nil>)
	I1204 15:31:44.328473   20106 status.go:176] ha-098000 status: &{Name:ha-098000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:31:44.328486   20106 status.go:174] checking status of ha-098000-m02 ...
	I1204 15:31:44.328768   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.328791   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.339979   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58447
	I1204 15:31:44.340313   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.340666   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.340683   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.340910   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.341011   20106 main.go:141] libmachine: (ha-098000-m02) Calling .GetState
	I1204 15:31:44.341108   20106 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:31:44.341184   20106 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 19317
	I1204 15:31:44.342367   20106 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 19317 missing from process table
	I1204 15:31:44.342409   20106 status.go:371] ha-098000-m02 host status = "Stopped" (err=<nil>)
	I1204 15:31:44.342418   20106 status.go:384] host is not running, skipping remaining checks
	I1204 15:31:44.342422   20106 status.go:176] ha-098000-m02 status: &{Name:ha-098000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:31:44.342432   20106 status.go:174] checking status of ha-098000-m03 ...
	I1204 15:31:44.342708   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.342735   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.353855   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58449
	I1204 15:31:44.354180   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.354496   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.354505   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.354732   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.354828   20106 main.go:141] libmachine: (ha-098000-m03) Calling .GetState
	I1204 15:31:44.354915   20106 main.go:141] libmachine: (ha-098000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:31:44.355000   20106 main.go:141] libmachine: (ha-098000-m03) DBG | hyperkit pid from json: 19347
	I1204 15:31:44.356212   20106 status.go:371] ha-098000-m03 host status = "Running" (err=<nil>)
	I1204 15:31:44.356221   20106 host.go:66] Checking if "ha-098000-m03" exists ...
	I1204 15:31:44.356489   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.356514   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.367341   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58451
	I1204 15:31:44.367658   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.367984   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.367994   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.368228   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.368332   20106 main.go:141] libmachine: (ha-098000-m03) Calling .GetIP
	I1204 15:31:44.368435   20106 host.go:66] Checking if "ha-098000-m03" exists ...
	I1204 15:31:44.368701   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.368730   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.379689   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58453
	I1204 15:31:44.380022   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.380382   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.380398   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.380645   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.380769   20106 main.go:141] libmachine: (ha-098000-m03) Calling .DriverName
	I1204 15:31:44.380929   20106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 15:31:44.380940   20106 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHHostname
	I1204 15:31:44.381040   20106 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHPort
	I1204 15:31:44.381126   20106 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHKeyPath
	I1204 15:31:44.381216   20106 main.go:141] libmachine: (ha-098000-m03) Calling .GetSSHUsername
	I1204 15:31:44.381298   20106 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m03/id_rsa Username:docker}
	I1204 15:31:44.416275   20106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:31:44.428197   20106 kubeconfig.go:125] found "ha-098000" server: "https://192.169.0.254:8443"
	I1204 15:31:44.428212   20106 api_server.go:166] Checking apiserver status ...
	I1204 15:31:44.428270   20106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:31:44.439782   20106 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup
	W1204 15:31:44.447907   20106 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:31:44.447962   20106 ssh_runner.go:195] Run: ls
	I1204 15:31:44.451130   20106 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1204 15:31:44.454220   20106 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1204 15:31:44.454231   20106 status.go:463] ha-098000-m03 apiserver status = Running (err=<nil>)
	I1204 15:31:44.454236   20106 status.go:176] ha-098000-m03 status: &{Name:ha-098000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:31:44.454244   20106 status.go:174] checking status of ha-098000-m04 ...
	I1204 15:31:44.454528   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.454548   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.465607   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58457
	I1204 15:31:44.465952   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.466291   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.466300   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.466517   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.466614   20106 main.go:141] libmachine: (ha-098000-m04) Calling .GetState
	I1204 15:31:44.466713   20106 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:31:44.466807   20106 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 19762
	I1204 15:31:44.468026   20106 status.go:371] ha-098000-m04 host status = "Running" (err=<nil>)
	I1204 15:31:44.468035   20106 host.go:66] Checking if "ha-098000-m04" exists ...
	I1204 15:31:44.468286   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.468315   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.479388   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58459
	I1204 15:31:44.479696   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.480056   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.480074   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.480288   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.480391   20106 main.go:141] libmachine: (ha-098000-m04) Calling .GetIP
	I1204 15:31:44.480487   20106 host.go:66] Checking if "ha-098000-m04" exists ...
	I1204 15:31:44.480779   20106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:31:44.480799   20106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:31:44.491822   20106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58461
	I1204 15:31:44.492153   20106 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:31:44.492582   20106 main.go:141] libmachine: Using API Version  1
	I1204 15:31:44.492600   20106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:31:44.492835   20106 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:31:44.492954   20106 main.go:141] libmachine: (ha-098000-m04) Calling .DriverName
	I1204 15:31:44.493123   20106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 15:31:44.493134   20106 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHHostname
	I1204 15:31:44.493230   20106 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHPort
	I1204 15:31:44.493322   20106 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHKeyPath
	I1204 15:31:44.493438   20106 main.go:141] libmachine: (ha-098000-m04) Calling .GetSSHUsername
	I1204 15:31:44.493521   20106 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/ha-098000-m04/id_rsa Username:docker}
	I1204 15:31:44.525159   20106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:31:44.536331   20106 status.go:176] ha-098000-m04 status: &{Name:ha-098000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.77s)
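The stderr trace above is the whole status probe: load the profile config, launch the hyperkit driver plugin on a local RPC port, infer host state from the hyperkit pid, SSH in to check disk usage and kubelet (systemctl is-active), locate the apiserver with pgrep, and finish with a GET against the shared healthz endpoint. A minimal sketch of that final step, assuming the endpoint from this log; the real probe authenticates with the cluster's client certificates, so the InsecureSkipVerify below is illustrative only:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Skipping TLS verification is a simplification for this sketch.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.169.0.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The log above treats a 200 with body "ok" as apiserver Running.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}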

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.45s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.45s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (40.32s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 node start m02 -v=7 --alsologtostderr
E1204 15:32:15.737181   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p ha-098000 node start m02 -v=7 --alsologtostderr: (39.743359634s)
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.32s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (24.99s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 stop -v=7 --alsologtostderr
E1204 15:36:36.509565   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-darwin-amd64 -p ha-098000 stop -v=7 --alsologtostderr: (24.869379397s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr: exit status 7 (116.676832ms)

                                                
                                                
-- stdout --
	ha-098000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-098000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-098000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:36:49.485054   20381 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:36:49.485365   20381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:36:49.485370   20381 out.go:358] Setting ErrFile to fd 2...
	I1204 15:36:49.485374   20381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:36:49.485577   20381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:36:49.485754   20381 out.go:352] Setting JSON to false
	I1204 15:36:49.485776   20381 mustload.go:65] Loading cluster: ha-098000
	I1204 15:36:49.485814   20381 notify.go:220] Checking for updates...
	I1204 15:36:49.486135   20381 config.go:182] Loaded profile config "ha-098000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:36:49.486160   20381 status.go:174] checking status of ha-098000 ...
	I1204 15:36:49.486603   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:49.486657   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:49.497985   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58871
	I1204 15:36:49.498300   20381 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:49.498717   20381 main.go:141] libmachine: Using API Version  1
	I1204 15:36:49.498728   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:49.499002   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:49.499143   20381 main.go:141] libmachine: (ha-098000) Calling .GetState
	I1204 15:36:49.499258   20381 main.go:141] libmachine: (ha-098000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:36:49.499318   20381 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid from json: 20209
	I1204 15:36:49.500435   20381 main.go:141] libmachine: (ha-098000) DBG | hyperkit pid 20209 missing from process table
	I1204 15:36:49.500475   20381 status.go:371] ha-098000 host status = "Stopped" (err=<nil>)
	I1204 15:36:49.500484   20381 status.go:384] host is not running, skipping remaining checks
	I1204 15:36:49.500490   20381 status.go:176] ha-098000 status: &{Name:ha-098000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:36:49.500513   20381 status.go:174] checking status of ha-098000-m02 ...
	I1204 15:36:49.500793   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:49.500817   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:49.514523   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58873
	I1204 15:36:49.514848   20381 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:49.515216   20381 main.go:141] libmachine: Using API Version  1
	I1204 15:36:49.515233   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:49.515471   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:49.515577   20381 main.go:141] libmachine: (ha-098000-m02) Calling .GetState
	I1204 15:36:49.515684   20381 main.go:141] libmachine: (ha-098000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:36:49.515751   20381 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid from json: 20220
	I1204 15:36:49.516890   20381 main.go:141] libmachine: (ha-098000-m02) DBG | hyperkit pid 20220 missing from process table
	I1204 15:36:49.516931   20381 status.go:371] ha-098000-m02 host status = "Stopped" (err=<nil>)
	I1204 15:36:49.516938   20381 status.go:384] host is not running, skipping remaining checks
	I1204 15:36:49.516942   20381 status.go:176] ha-098000-m02 status: &{Name:ha-098000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:36:49.516957   20381 status.go:174] checking status of ha-098000-m04 ...
	I1204 15:36:49.517237   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:36:49.517260   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:36:49.528210   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58875
	I1204 15:36:49.528541   20381 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:36:49.528870   20381 main.go:141] libmachine: Using API Version  1
	I1204 15:36:49.528883   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:36:49.529109   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:36:49.529217   20381 main.go:141] libmachine: (ha-098000-m04) Calling .GetState
	I1204 15:36:49.529361   20381 main.go:141] libmachine: (ha-098000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:36:49.529447   20381 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid from json: 20252
	I1204 15:36:49.530570   20381 main.go:141] libmachine: (ha-098000-m04) DBG | hyperkit pid 20252 missing from process table
	I1204 15:36:49.530589   20381 status.go:371] ha-098000-m04 host status = "Stopped" (err=<nil>)
	I1204 15:36:49.530595   20381 status.go:384] host is not running, skipping remaining checks
	I1204 15:36:49.530598   20381 status.go:176] ha-098000-m04 status: &{Name:ha-098000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.99s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (164.39s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-098000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E1204 15:37:59.594278   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-darwin-amd64 start -p ha-098000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (2m43.885515882s)
ha_test.go:568: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (164.39s)
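The last check at ha_test.go:594 pipes each node's conditions through a go-template and keeps only the Ready status. A hedged equivalent that shells out to kubectl's JSON output and extracts the same field (the struct below models only the slice of the standard node schema this check needs):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type nodeList struct {
		Items []struct {
			Metadata struct{ Name string } `json:"metadata"`
			Status   struct {
				Conditions []struct{ Type, Status string } `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var nl nodeList
		if err := json.Unmarshal(out, &nl); err != nil {
			panic(err)
		}
		for _, n := range nl.Items {
			for _, c := range n.Status.Conditions {
				// Same condition the go-template filters on: type == "Ready".
				if c.Type == "Ready" {
					fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
				}
			}
		}
	}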

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.43s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.43s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.18s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-098000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-098000 --control-plane -v=7 --alsologtostderr: (1m14.672271375s)
ha_test.go:613: (dbg) Run:  out/minikube-darwin-amd64 -p ha-098000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.18s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

                                                
                                    
TestJSONOutput/start/Command (84.61s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-299000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-299000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m24.611983566s)
--- PASS: TestJSONOutput/start/Command (84.61s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.5s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-299000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.47s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-299000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.35s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-299000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-299000 --output=json --user=testUser: (8.347407181s)
--- PASS: TestJSONOutput/stop/Command (8.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.63s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-438000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-438000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (378.954207ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"acbab3b2-edbf-4c58-b17d-d65ce53a1511","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-438000] minikube v1.34.0 on Darwin 15.0.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ce04519-a9a2-480d-9b77-4afbcbf4a1f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20045"}}
	{"specversion":"1.0","id":"b3a552d0-d441-4bac-9107-0bb9056846fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig"}}
	{"specversion":"1.0","id":"12db3511-abef-41e8-a97d-f2fcf0805176","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7dd276a8-8341-4cf1-9858-e95e78a832b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0694a0b3-6244-4fda-ad44-4a392414c841","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube"}}
	{"specversion":"1.0","id":"97eb4589-c1fa-48f5-96c5-75175e7adbf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"380f40d7-30ac-4c32-9ccf-0959ae2824da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-438000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-438000
--- PASS: TestErrorJSONOutput (0.63s)
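The captured stdout shows the shape of --output=json: one CloudEvents-style JSON object per line, info/step events while setup proceeds, then a single io.k8s.sigs.minikube.error event carrying the exit code, error name, and message. A small sketch of decoding one such error line, using the field names visible above; modeling the data payload as a string map is a simplification:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// One error event line, abridged from the stdout captured above.
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",` +
			`"data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/amd64",` +
			`"name":"DRV_UNSUPPORTED_OS"}}`
		var ev struct {
			Type string            `json:"type"`
			Data map[string]string `json:"data"`
		}
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}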

                                                
                                    
TestMainNoArgs (0.09s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

                                                
                                    
TestMinikubeProfile (88.92s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-313000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-313000 --driver=hyperkit : (37.658063761s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-323000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-323000 --driver=hyperkit : (41.518428899s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-313000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-323000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-323000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-323000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-323000: (3.437762679s)
helpers_test.go:175: Cleaning up "first-313000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-313000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-313000: (5.275176187s)
--- PASS: TestMinikubeProfile (88.92s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.66s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-802000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-802000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m49.395690486s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.66s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.99s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-802000 -- rollout status deployment/busybox: (3.006052976s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-djjdz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-qqqm9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-djjdz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-qqqm9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-djjdz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-qqqm9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.99s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-djjdz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-djjdz -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-qqqm9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-802000 -- exec busybox-7dff88458-qqqm9 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
TestMultiNode/serial/AddNode (45.52s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-802000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-802000 -v 3 --alsologtostderr: (45.164637295s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.52s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-802000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.39s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.92s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp testdata/cp-test.txt multinode-802000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp multinode-802000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile569237729/001/cp-test_multinode-802000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp multinode-802000:/home/docker/cp-test.txt multinode-802000-m02:/home/docker/cp-test_multinode-802000_multinode-802000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m02 "sudo cat /home/docker/cp-test_multinode-802000_multinode-802000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp multinode-802000:/home/docker/cp-test.txt multinode-802000-m03:/home/docker/cp-test_multinode-802000_multinode-802000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m03 "sudo cat /home/docker/cp-test_multinode-802000_multinode-802000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp testdata/cp-test.txt multinode-802000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp multinode-802000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile569237729/001/cp-test_multinode-802000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp multinode-802000-m02:/home/docker/cp-test.txt multinode-802000:/home/docker/cp-test_multinode-802000-m02_multinode-802000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000 "sudo cat /home/docker/cp-test_multinode-802000-m02_multinode-802000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp multinode-802000-m02:/home/docker/cp-test.txt multinode-802000-m03:/home/docker/cp-test_multinode-802000-m02_multinode-802000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m03 "sudo cat /home/docker/cp-test_multinode-802000-m02_multinode-802000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp testdata/cp-test.txt multinode-802000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp multinode-802000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile569237729/001/cp-test_multinode-802000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp multinode-802000-m03:/home/docker/cp-test.txt multinode-802000:/home/docker/cp-test_multinode-802000-m03_multinode-802000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000 "sudo cat /home/docker/cp-test_multinode-802000-m03_multinode-802000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 cp multinode-802000-m03:/home/docker/cp-test.txt multinode-802000-m02:/home/docker/cp-test_multinode-802000-m03_multinode-802000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 ssh -n multinode-802000-m02 "sudo cat /home/docker/cp-test_multinode-802000-m03_multinode-802000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.92s)

                                                
                                    
TestMultiNode/serial/StopNode (2.92s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 node stop m03
E1204 15:50:53.923276   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-802000 node stop m03: (2.357520486s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-802000 status: exit status 7 (279.9664ms)

-- stdout --
	multinode-802000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-802000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-802000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-802000 status --alsologtostderr: exit status 7 (279.746471ms)

-- stdout --
	multinode-802000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-802000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-802000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:50:56.011156   21215 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:50:56.011373   21215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:50:56.011378   21215 out.go:358] Setting ErrFile to fd 2...
	I1204 15:50:56.011382   21215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:50:56.011557   21215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:50:56.011746   21215 out.go:352] Setting JSON to false
	I1204 15:50:56.011767   21215 mustload.go:65] Loading cluster: multinode-802000
	I1204 15:50:56.011806   21215 notify.go:220] Checking for updates...
	I1204 15:50:56.012143   21215 config.go:182] Loaded profile config "multinode-802000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:50:56.012167   21215 status.go:174] checking status of multinode-802000 ...
	I1204 15:50:56.012575   21215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:50:56.012617   21215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:50:56.024263   21215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59862
	I1204 15:50:56.024592   21215 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:50:56.024987   21215 main.go:141] libmachine: Using API Version  1
	I1204 15:50:56.024997   21215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:50:56.025254   21215 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:50:56.025369   21215 main.go:141] libmachine: (multinode-802000) Calling .GetState
	I1204 15:50:56.025475   21215 main.go:141] libmachine: (multinode-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:50:56.025534   21215 main.go:141] libmachine: (multinode-802000) DBG | hyperkit pid from json: 20896
	I1204 15:50:56.026919   21215 status.go:371] multinode-802000 host status = "Running" (err=<nil>)
	I1204 15:50:56.026934   21215 host.go:66] Checking if "multinode-802000" exists ...
	I1204 15:50:56.027190   21215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:50:56.027212   21215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:50:56.042114   21215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59864
	I1204 15:50:56.042531   21215 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:50:56.042911   21215 main.go:141] libmachine: Using API Version  1
	I1204 15:50:56.042922   21215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:50:56.043145   21215 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:50:56.043263   21215 main.go:141] libmachine: (multinode-802000) Calling .GetIP
	I1204 15:50:56.043354   21215 host.go:66] Checking if "multinode-802000" exists ...
	I1204 15:50:56.043608   21215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:50:56.043632   21215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:50:56.054650   21215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59866
	I1204 15:50:56.054962   21215 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:50:56.055283   21215 main.go:141] libmachine: Using API Version  1
	I1204 15:50:56.055298   21215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:50:56.055518   21215 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:50:56.055620   21215 main.go:141] libmachine: (multinode-802000) Calling .DriverName
	I1204 15:50:56.055784   21215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 15:50:56.055815   21215 main.go:141] libmachine: (multinode-802000) Calling .GetSSHHostname
	I1204 15:50:56.055922   21215 main.go:141] libmachine: (multinode-802000) Calling .GetSSHPort
	I1204 15:50:56.056027   21215 main.go:141] libmachine: (multinode-802000) Calling .GetSSHKeyPath
	I1204 15:50:56.056121   21215 main.go:141] libmachine: (multinode-802000) Calling .GetSSHUsername
	I1204 15:50:56.056243   21215 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/multinode-802000/id_rsa Username:docker}
	I1204 15:50:56.086207   21215 ssh_runner.go:195] Run: systemctl --version
	I1204 15:50:56.090628   21215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:50:56.101407   21215 kubeconfig.go:125] found "multinode-802000" server: "https://192.169.0.14:8443"
	I1204 15:50:56.101431   21215 api_server.go:166] Checking apiserver status ...
	I1204 15:50:56.101479   21215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:50:56.112366   21215 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1936/cgroup
	W1204 15:50:56.120442   21215 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1936/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:50:56.120496   21215 ssh_runner.go:195] Run: ls
	I1204 15:50:56.123774   21215 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I1204 15:50:56.126817   21215 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I1204 15:50:56.126827   21215 status.go:463] multinode-802000 apiserver status = Running (err=<nil>)
	I1204 15:50:56.126835   21215 status.go:176] multinode-802000 status: &{Name:multinode-802000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:50:56.126847   21215 status.go:174] checking status of multinode-802000-m02 ...
	I1204 15:50:56.127095   21215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:50:56.127117   21215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:50:56.138247   21215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59870
	I1204 15:50:56.138573   21215 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:50:56.138914   21215 main.go:141] libmachine: Using API Version  1
	I1204 15:50:56.138923   21215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:50:56.139135   21215 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:50:56.139219   21215 main.go:141] libmachine: (multinode-802000-m02) Calling .GetState
	I1204 15:50:56.139309   21215 main.go:141] libmachine: (multinode-802000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:50:56.139377   21215 main.go:141] libmachine: (multinode-802000-m02) DBG | hyperkit pid from json: 20919
	I1204 15:50:56.140774   21215 status.go:371] multinode-802000-m02 host status = "Running" (err=<nil>)
	I1204 15:50:56.140783   21215 host.go:66] Checking if "multinode-802000-m02" exists ...
	I1204 15:50:56.141039   21215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:50:56.141068   21215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:50:56.151953   21215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59872
	I1204 15:50:56.152343   21215 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:50:56.152704   21215 main.go:141] libmachine: Using API Version  1
	I1204 15:50:56.152721   21215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:50:56.152945   21215 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:50:56.153049   21215 main.go:141] libmachine: (multinode-802000-m02) Calling .GetIP
	I1204 15:50:56.153151   21215 host.go:66] Checking if "multinode-802000-m02" exists ...
	I1204 15:50:56.153418   21215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:50:56.153439   21215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:50:56.164193   21215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59874
	I1204 15:50:56.164518   21215 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:50:56.164879   21215 main.go:141] libmachine: Using API Version  1
	I1204 15:50:56.164895   21215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:50:56.165105   21215 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:50:56.165198   21215 main.go:141] libmachine: (multinode-802000-m02) Calling .DriverName
	I1204 15:50:56.165371   21215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 15:50:56.165384   21215 main.go:141] libmachine: (multinode-802000-m02) Calling .GetSSHHostname
	I1204 15:50:56.165476   21215 main.go:141] libmachine: (multinode-802000-m02) Calling .GetSSHPort
	I1204 15:50:56.165566   21215 main.go:141] libmachine: (multinode-802000-m02) Calling .GetSSHKeyPath
	I1204 15:50:56.165671   21215 main.go:141] libmachine: (multinode-802000-m02) Calling .GetSSHUsername
	I1204 15:50:56.165759   21215 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20045-17258/.minikube/machines/multinode-802000-m02/id_rsa Username:docker}
	I1204 15:50:56.195685   21215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:50:56.206988   21215 status.go:176] multinode-802000-m02 status: &{Name:multinode-802000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:50:56.207016   21215 status.go:174] checking status of multinode-802000-m03 ...
	I1204 15:50:56.207324   21215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:50:56.207349   21215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:50:56.218439   21215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59877
	I1204 15:50:56.218775   21215 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:50:56.219093   21215 main.go:141] libmachine: Using API Version  1
	I1204 15:50:56.219120   21215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:50:56.219331   21215 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:50:56.219430   21215 main.go:141] libmachine: (multinode-802000-m03) Calling .GetState
	I1204 15:50:56.219522   21215 main.go:141] libmachine: (multinode-802000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:50:56.219590   21215 main.go:141] libmachine: (multinode-802000-m03) DBG | hyperkit pid from json: 20995
	I1204 15:50:56.221004   21215 main.go:141] libmachine: (multinode-802000-m03) DBG | hyperkit pid 20995 missing from process table
	I1204 15:50:56.221046   21215 status.go:371] multinode-802000-m03 host status = "Stopped" (err=<nil>)
	I1204 15:50:56.221055   21215 status.go:384] host is not running, skipping remaining checks
	I1204 15:50:56.221060   21215 status.go:176] multinode-802000-m03 status: &{Name:multinode-802000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.92s)
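Note: the `exit status 7` from `minikube status` above is expected rather than a failure: `status` reports component state through its exit code (see `minikube status --help` for the encoding), so a stopped node makes the command exit non-zero even though it ran cleanly. A sketch of telling the two cases apart from Go:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-802000", "status")
    	out, err := cmd.Output()
    	fmt.Print(string(out)) // per-node host/kubelet/apiserver state, as in the stdout above

    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("all components running")
    	case errors.As(err, &ee):
    		// A non-zero exit encodes "something is stopped", not a broken command.
    		fmt.Println("status exit code:", ee.ExitCode())
    	default:
    		panic(err) // the binary itself failed to run
    	}
    }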

TestMultiNode/serial/StartAfterStop (41.68s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 node start m03 -v=7 --alsologtostderr
E1204 15:51:36.635044   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-802000 node start m03 -v=7 --alsologtostderr: (41.27388919s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.68s)

TestMultiNode/serial/RestartKeepsNodes (156.48s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-802000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-802000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-802000: (18.876513021s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-802000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-802000 --wait=true -v=8 --alsologtostderr: (2m17.462783438s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-802000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (156.48s)

TestMultiNode/serial/DeleteNode (3.42s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-802000 node delete m03: (3.011963908s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.42s)
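Note: the go-template in the final `kubectl get nodes` prints each node's Ready condition, one status per line, so after deleting m03 the expected output is two `True` lines. A sketch of the same readiness check driven from Go (the all-True assertion is illustrative; the test's own checks live in multinode_test.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	tmpl := `{{range .items}}{{range .status.conditions}}` +
    		`{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
    	out, err := exec.Command("kubectl", "--context", "multinode-802000",
    		"get", "nodes", "-o", "go-template="+tmpl).Output()
    	if err != nil {
    		panic(err)
    	}
    	// One Ready status per node; every one of them should be "True".
    	for _, status := range strings.Fields(string(out)) {
    		if status != "True" {
    			panic("node not ready: " + status)
    		}
    	}
    	fmt.Println("all nodes Ready")
    }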

TestMultiNode/serial/StopMultiNode (16.85s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-802000 stop: (16.646763769s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-802000 status: exit status 7 (101.752037ms)

-- stdout --
	multinode-802000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-802000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-802000 status --alsologtostderr: exit status 7 (101.086262ms)

-- stdout --
	multinode-802000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-802000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:54:34.631286   21377 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:54:34.631593   21377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:54:34.631599   21377 out.go:358] Setting ErrFile to fd 2...
	I1204 15:54:34.631603   21377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:54:34.631777   21377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-17258/.minikube/bin
	I1204 15:54:34.631954   21377 out.go:352] Setting JSON to false
	I1204 15:54:34.631976   21377 mustload.go:65] Loading cluster: multinode-802000
	I1204 15:54:34.632023   21377 notify.go:220] Checking for updates...
	I1204 15:54:34.632272   21377 config.go:182] Loaded profile config "multinode-802000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:54:34.632294   21377 status.go:174] checking status of multinode-802000 ...
	I1204 15:54:34.632760   21377 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:54:34.632801   21377 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:54:34.644096   21377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60109
	I1204 15:54:34.644401   21377 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:54:34.644804   21377 main.go:141] libmachine: Using API Version  1
	I1204 15:54:34.644815   21377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:54:34.645032   21377 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:54:34.645149   21377 main.go:141] libmachine: (multinode-802000) Calling .GetState
	I1204 15:54:34.645250   21377 main.go:141] libmachine: (multinode-802000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:54:34.645322   21377 main.go:141] libmachine: (multinode-802000) DBG | hyperkit pid from json: 21281
	I1204 15:54:34.646434   21377 main.go:141] libmachine: (multinode-802000) DBG | hyperkit pid 21281 missing from process table
	I1204 15:54:34.646472   21377 status.go:371] multinode-802000 host status = "Stopped" (err=<nil>)
	I1204 15:54:34.646484   21377 status.go:384] host is not running, skipping remaining checks
	I1204 15:54:34.646490   21377 status.go:176] multinode-802000 status: &{Name:multinode-802000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 15:54:34.646511   21377 status.go:174] checking status of multinode-802000-m02 ...
	I1204 15:54:34.646778   21377 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1204 15:54:34.646857   21377 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1204 15:54:34.660345   21377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60111
	I1204 15:54:34.661041   21377 main.go:141] libmachine: () Calling .GetVersion
	I1204 15:54:34.661806   21377 main.go:141] libmachine: Using API Version  1
	I1204 15:54:34.661826   21377 main.go:141] libmachine: () Calling .SetConfigRaw
	I1204 15:54:34.662062   21377 main.go:141] libmachine: () Calling .GetMachineName
	I1204 15:54:34.662164   21377 main.go:141] libmachine: (multinode-802000-m02) Calling .GetState
	I1204 15:54:34.662251   21377 main.go:141] libmachine: (multinode-802000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1204 15:54:34.662331   21377 main.go:141] libmachine: (multinode-802000-m02) DBG | hyperkit pid from json: 21303
	I1204 15:54:34.663454   21377 main.go:141] libmachine: (multinode-802000-m02) DBG | hyperkit pid 21303 missing from process table
	I1204 15:54:34.663491   21377 status.go:371] multinode-802000-m02 host status = "Stopped" (err=<nil>)
	I1204 15:54:34.663500   21377 status.go:384] host is not running, skipping remaining checks
	I1204 15:54:34.663503   21377 status.go:176] multinode-802000-m02 status: &{Name:multinode-802000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.85s)

TestMultiNode/serial/RestartMultiNode (123.55s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-802000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E1204 15:54:39.726259   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:55:53.931633   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 15:56:36.644943   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-802000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (2m3.151350011s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-802000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (123.55s)

TestMultiNode/serial/ValidateNameConflict (45.53s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-802000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-802000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-802000-m02 --driver=hyperkit : exit status 14 (437.488045ms)

-- stdout --
	* [multinode-802000-m02] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-802000-m02' is duplicated with machine name 'multinode-802000-m02' in profile 'multinode-802000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-802000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-802000-m03 --driver=hyperkit : (39.337808918s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-802000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-802000: exit status 80 (306.229118ms)

-- stdout --
	* Adding node m03 to cluster multinode-802000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-802000-m03 already exists in multinode-802000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-802000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-802000-m03: (5.373725056s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.53s)
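Note: both refusals above are deliberate guards: exit status 14 (MK_USAGE) when a new profile name collides with an existing machine name, and exit status 80 (GUEST_NODE_ADD) when the node name minikube would generate already belongs to another profile. `minikube profile list --output=json` (used by TestNoKubernetes/serial/ProfileList further down) shows which names are taken; a sketch, assuming the JSON carries a top-level `valid` array of profiles with a `Name` field:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("out/minikube-darwin-amd64",
    		"profile", "list", "--output=json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var profiles struct {
    		Valid []struct{ Name string } `json:"valid"`
    	}
    	if err := json.Unmarshal(out, &profiles); err != nil {
    		panic(err)
    	}
    	candidate := "multinode-802000-m02" // the name rejected above
    	for _, p := range profiles.Valid {
    		if p.Name == candidate {
    			fmt.Println("name already taken:", candidate)
    			return
    		}
    	}
    	fmt.Println("name is free:", candidate)
    }

This only catches top-level profile names; as the log shows, a name can also collide with a machine name inside a multi-node profile.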

TestPreload (145.76s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-328000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-328000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m22.584031327s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-328000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-328000 image pull gcr.io/k8s-minikube/busybox: (1.623985401s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-328000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-328000: (8.426815901s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-328000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-328000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (47.666346519s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-328000 image list
helpers_test.go:175: Cleaning up "test-preload-328000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-328000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-328000: (5.267422436s)
--- PASS: TestPreload (145.76s)
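Note: the sequence above is the preload exercise: start v1.24.4 with `--preload=false`, pull an extra image, stop, restart on the default Kubernetes version, then `image list` to confirm the pulled image survived. A sketch of that final check (image name as pulled above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("out/minikube-darwin-amd64",
    		"-p", "test-preload-328000", "image", "list").Output()
    	if err != nil {
    		panic(err)
    	}
    	// The manually pulled image must still be present after the restart.
    	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
    		panic("pulled image missing after restart")
    	}
    	fmt.Println("busybox image survived the restart")
    }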

TestSkaffold (115.57s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe254470405 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe254470405 version: (1.651965989s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-692000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-692000 --memory=2600 --driver=hyperkit : (39.064233661s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe254470405 run --minikube-profile skaffold-692000 --kube-context skaffold-692000 --status-check=true --port-forward=false --interactive=false
E1204 16:03:57.022740   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe254470405 run --minikube-profile skaffold-692000 --kube-context skaffold-692000 --status-check=true --port-forward=false --interactive=false: (56.859731172s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6987bbdb78-vk5b2" [c3e284f0-9a76-458b-8d87-612abea0a082] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002980406s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-85f97c56df-8l455" [c92f45fe-ec0a-4169-abcc-1172d3e3dd2f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005233628s
helpers_test.go:175: Cleaning up "skaffold-692000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-692000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-692000: (5.266051856s)
--- PASS: TestSkaffold (115.57s)
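Note: the two health waits above poll pods by label until they are running. Outside the test harness, `kubectl wait` gives an equivalent check; a sketch using the labels, namespace, and 1m timeout from this run (`kubectl wait` is a stock kubectl subcommand, not something the test itself calls):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	for _, label := range []string{"app=leeroy-app", "app=leeroy-web"} {
    		// Block until every pod with this label reports Ready, or time out.
    		cmd := exec.Command("kubectl", "--context", "skaffold-692000",
    			"wait", "--for=condition=Ready", "pod",
    			"-l", label, "-n", "default", "--timeout=60s")
    		if out, err := cmd.CombinedOutput(); err != nil {
    			panic(string(out))
    		}
    		fmt.Println(label, "ready")
    	}
    }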

TestRunningBinaryUpgrade (99.77s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1156880906 start -p running-upgrade-042000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1156880906 start -p running-upgrade-042000 --memory=2200 --vm-driver=hyperkit : (53.42349005s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-042000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-042000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (39.279326781s)
helpers_test.go:175: Cleaning up "running-upgrade-042000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-042000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-042000: (5.260583968s)
--- PASS: TestRunningBinaryUpgrade (99.77s)

TestKubernetesUpgrade (1382.17s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-355000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-355000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (54.077992019s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-355000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-355000: (2.37313042s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-355000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-355000 status --format={{.Host}}: exit status 7 (83.055607ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-355000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit 
E1204 16:20:36.986667   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:20:53.909645   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:21:36.618812   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:24:01.919663   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:25:24.992915   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:25:53.895999   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:26:36.606783   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:27:59.697755   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:29:01.919549   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-355000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit : (10m46.882508442s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-355000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-355000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-355000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (599.974725ms)

-- stdout --
	* [kubernetes-upgrade-355000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-355000
	    minikube start -p kubernetes-upgrade-355000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3550002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-355000 --kubernetes-version=v1.31.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-355000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit 
E1204 16:30:53.895310   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:31:36.606378   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:34:01.917594   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:35:53.894667   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:36:36.605636   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:37:16.975184   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:39:01.918166   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:40:53.896360   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-355000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit : (11m12.759393028s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-355000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-355000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-355000: (5.333899137s)
--- PASS: TestKubernetesUpgrade (1382.17s)
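Note: exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is the guard this test targets: in-place upgrades are allowed (v1.20.0 to v1.31.2 took roughly 11 minutes each time here), while downgrades are refused with the delete-and-recreate suggestion shown. A sketch of catching that specific refusal from a wrapper (the exit code is taken from the output above):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("out/minikube-darwin-amd64", "start",
    		"-p", "kubernetes-upgrade-355000", "--memory=2200",
    		"--kubernetes-version=v1.20.0", "--driver=hyperkit").Run()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 106 {
    		// K8S_DOWNGRADE_UNSUPPORTED: follow the printed suggestion and
    		// recreate the cluster instead of downgrading in place.
    		fmt.Println("downgrade refused; delete the profile and start fresh")
    		return
    	}
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("started")
    }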

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.46s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=20045
- KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1946977023/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1946977023/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1946977023/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1946977023/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.46s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.02s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=20045
- KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2073316792/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2073316792/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2073316792/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2073316792/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.02s)

TestStoppedBinaryUpgrade/Setup (1.4s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.40s)

TestStoppedBinaryUpgrade/Upgrade (124.62s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1070657694 start -p stopped-upgrade-805000 --memory=2200 --vm-driver=hyperkit 
E1204 16:41:36.606602   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1070657694 start -p stopped-upgrade-805000 --memory=2200 --vm-driver=hyperkit : (44.307357287s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1070657694 -p stopped-upgrade-805000 stop
E1204 16:42:04.981369   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1070657694 -p stopped-upgrade-805000 stop: (8.264336239s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-805000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-805000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m12.050362459s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (124.62s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.41s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-805000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-805000: (2.408307058s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.41s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.59s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-162000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (588.350346ms)

-- stdout --
	* [NoKubernetes-162000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-17258/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-17258/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.59s)

TestNoKubernetes/serial/StartWithK8s (75.34s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162000 --driver=hyperkit 
E1204 16:44:01.886718   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-162000 --driver=hyperkit : (1m15.151204903s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-162000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (75.34s)

TestNetworkPlugins/group/auto/Start (65.23s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
E1204 16:44:39.667722   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m5.234413613s)
--- PASS: TestNetworkPlugins/group/auto/Start (65.23s)

TestNoKubernetes/serial/StartWithStopK8s (17.82s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-162000 --no-kubernetes --driver=hyperkit : (15.246229446s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-162000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-162000 status -o json: exit status 2 (169.571543ms)

-- stdout --
	{"Name":"NoKubernetes-162000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-162000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-162000: (2.405926799s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.82s)
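Note: `status -o json` exits 2 here because the host is up with the kubelet stopped, but the JSON on stdout is still complete, so a caller can decode it before looking at the exit code. A sketch matching the single-line shape shown above:

    package main

    import (
    	"encoding/json"
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // nodeStatus mirrors the keys in the JSON printed above.
    type nodeStatus struct {
    	Name      string
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-amd64",
    		"-p", "NoKubernetes-162000", "status", "-o", "json").Output()
    	var ee *exec.ExitError
    	if err != nil && !errors.As(err, &ee) {
    		panic(err) // only exit-code errors still carry usable stdout
    	}
    	var st nodeStatus
    	if err := json.Unmarshal(out, &st); err != nil {
    		panic(err)
    	}
    	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
    }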

TestNoKubernetes/serial/Start (19.48s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-162000 --no-kubernetes --driver=hyperkit : (19.483290317s)
--- PASS: TestNoKubernetes/serial/Start (19.48s)

TestNetworkPlugins/group/auto/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-232000 "pgrep -a kubelet"
I1204 16:45:14.912741   17821 config.go:182] Loaded profile config "auto-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.17s)

TestNetworkPlugins/group/auto/NetCatPod (11.16s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-232000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6dl66" [ecbc4599-c854-474d-94d2-f7310790d6d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6dl66" [ecbc4599-c854-474d-94d2-f7310790d6d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003707661s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.16s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-162000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-162000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (161.114278ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
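Note: systemctl is-active exits 0 only when the unit is active, and the "status 3" in stderr is the code systemd conventionally reports for an inactive unit, so the non-zero exit is exactly what this check wants. A hedged Go sketch of the same assertion, reusing the command from the log (the local minikube ssh exit is non-zero whenever the remote command fails):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", "NoKubernetes-162000",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		if err == nil {
			fmt.Println("FAIL: kubelet is active, but Kubernetes should be off")
			return
		}
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("OK: kubelet inactive, exit code", ee.ExitCode())
		}
	}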

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.65s)
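Note: "profile list --output=json" emits a machine-readable view of the same table. A cautious Go sketch that decodes it into a generic map rather than assuming minikube's exact schema (the binary path mirrors the log):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"profile", "list", "--output=json").Output()
		if err != nil {
			panic(err)
		}
		// Decode into a generic map so the sketch does not bake in a schema.
		var profiles map[string]json.RawMessage
		if err := json.Unmarshal(out, &profiles); err != nil {
			panic(err)
		}
		for key := range profiles {
			fmt.Println("top-level key:", key)
		}
	}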

                                                
                                    
TestNoKubernetes/serial/Stop (2.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-162000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-162000: (2.411675507s)
--- PASS: TestNoKubernetes/serial/Stop (2.41s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-232000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (19.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-162000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-162000 --driver=hyperkit : (19.365280223s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.37s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
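Note: the DNS, Localhost and HairPin checks above are three connectivity probes run inside the netcat pod: nslookup kubernetes.default validates cluster DNS, "nc -z localhost 8080" confirms the container can reach its own listening port, and the hairpin probe dials the pod's own Service name (netcat) to verify traffic can loop back through the Service. In common nc implementations, -z scans without sending data, -w 5 is the connect timeout and -i 5 the interval between probes. A consolidated Go sketch of the trio (commands copied from the log; the wrapper is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs a shell command inside the netcat deployment via kubectl exec.
	func probe(ctxName, shellCmd string) error {
		return exec.Command("kubectl", "--context", ctxName,
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).Run()
	}

	func main() {
		checks := []struct{ name, cmd string }{
			{"dns", "nslookup kubernetes.default"},
			{"localhost", "nc -w 5 -i 5 -z localhost 8080"},
			{"hairpin", "nc -w 5 -i 5 -z netcat 8080"}, // pod dials its own Service name
		}
		for _, c := range checks {
			if err := probe("auto-232000", c.cmd); err != nil {
				fmt.Println(c.name, "failed:", err)
			} else {
				fmt.Println(c.name, "ok")
			}
		}
	}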

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (52.655200615s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.66s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-162000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-162000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (150.149615ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (58.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
E1204 16:45:53.862767   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:46:36.574847   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (58.875579905s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (58.88s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-c9mlh" [336139a4-799d-49b1-8c30-4a438b8f091e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004220471s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
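Note: each ControllerPod step gates the network checks on the CNI's controller pods being up; here that means pods labeled app=flannel in the kube-flannel namespace. An equivalent readiness gate can be expressed with kubectl wait, as in this Go sketch (names mirror the log; kubectl wait is a standard kubectl subcommand):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Block until every flannel controller pod reports the Ready condition.
		cmd := exec.Command("kubectl", "--context", "flannel-232000",
			"-n", "kube-flannel", "wait", "pod", "-l", "app=flannel",
			"--for=condition=Ready", "--timeout=10m")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("flannel pods not ready:", err)
		}
		fmt.Print(string(out))
	}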

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-232000 "pgrep -a kubelet"
I1204 16:46:43.112117   17821 config.go:182] Loaded profile config "flannel-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-232000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m9vd7" [affdc9aa-90a9-493e-8421-60a83cb14707] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m9vd7" [affdc9aa-90a9-493e-8421-60a83cb14707] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.005150213s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-232000 "pgrep -a kubelet"
I1204 16:46:47.161324   17821 config.go:182] Loaded profile config "enable-default-cni-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-232000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-htx6l" [b67333ec-45f2-41f4-818b-7a0eb3e94f8e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-htx6l" [b67333ec-45f2-41f4-818b-7a0eb3e94f8e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003676837s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-232000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-232000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (51.80s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (51.802663064s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.80s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (79.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m19.462105545s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.46s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-232000 "pgrep -a kubelet"
I1204 16:48:07.326037   17821 config.go:182] Loaded profile config "bridge-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-232000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8t8rd" [4cf37704-eeb3-4341-a7b2-7dce553f6271] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8t8rd" [4cf37704-eeb3-4341-a7b2-7dce553f6271] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004152665s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-232000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4xg8m" [05e9a93e-08af-432a-870c-33e0cc944be5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003791793s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (74.50s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (1m14.498490232s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (74.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-232000 "pgrep -a kubelet"
I1204 16:48:43.280135   17821 config.go:182] Loaded profile config "kindnet-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-232000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d5pzj" [a8a9b789-ede9-4ff4-adcd-37d8d88e82ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d5pzj" [a8a9b789-ede9-4ff4-adcd-37d8d88e82ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004275528s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-232000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (54.64376368s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.64s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-232000 "pgrep -a kubelet"
I1204 16:49:53.657141   17821 config.go:182] Loaded profile config "kubenet-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-232000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sczn2" [63d30f10-cfef-4a2f-b3e4-2e504c07fb33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sczn2" [63d30f10-cfef-4a2f-b3e4-2e504c07fb33] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004968581s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-232000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-232000 "pgrep -a kubelet"
I1204 16:50:09.060719   17821 config.go:182] Loaded profile config "custom-flannel-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-232000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gb9ck" [2cddb474-61c8-4a0e-98fc-cc2e63677db5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1204 16:50:15.063342   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:50:15.069745   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:50:15.080969   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:50:15.103160   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:50:15.145386   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-gb9ck" [2cddb474-61c8-4a0e-98fc-cc2e63677db5] Running
E1204 16:50:15.226785   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:50:15.388065   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:50:15.709663   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:50:16.352167   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:50:17.633804   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004125557s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-232000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (66.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E1204 16:50:25.320504   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m6.65164522s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.65s)

                                                
                                    
TestNetworkPlugins/group/false/Start (165.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E1204 16:50:53.861780   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:50:56.044107   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-232000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (2m45.007057981s)
--- PASS: TestNetworkPlugins/group/false/Start (165.01s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k9tg9" [2034e69f-0c40-4329-87fb-fa9751811412] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00398113s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-232000 "pgrep -a kubelet"
I1204 16:51:36.158764   17821 config.go:182] Loaded profile config "calico-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-232000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cqc6l" [c7fa02db-8dde-4980-8135-c7177dc8f29e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1204 16:51:36.571808   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:36.930161   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:36.937111   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:36.948618   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:36.970440   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:37.006207   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:37.013140   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:37.095459   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:37.258968   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:37.580850   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:38.222164   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:39.504479   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:42.066211   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-cqc6l" [c7fa02db-8dde-4980-8135-c7177dc8f29e] Running
E1204 16:51:47.188919   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:47.291374   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:47.297730   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004142724s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.14s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-232000 exec deployment/netcat -- nslookup kubernetes.default
E1204 16:51:47.310931   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:47.334426   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:51:47.375912   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1204 16:51:47.457158   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1204 16:51:47.618601   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (163.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-015000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E1204 16:52:07.791135   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:52:17.914720   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:52:28.273087   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:52:58.877482   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:52:58.928905   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:07.456792   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:07.464199   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:07.475605   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:07.497788   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:07.539149   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:07.620541   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:07.783528   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:08.104791   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:08.746154   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:09.235603   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:10.029248   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:12.590565   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:17.711995   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-015000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m43.589403957s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (163.59s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-232000 "pgrep -a kubelet"
I1204 16:53:25.267824   17821 config.go:182] Loaded profile config "false-232000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-232000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sddd2" [1c22833d-e1c6-49b7-a3a0-44f15485aec6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1204 16:53:27.954002   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-sddd2" [1c22833d-e1c6-49b7-a3a0-44f15485aec6] Running
E1204 16:53:37.091711   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:37.098326   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:37.110635   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:37.133503   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:37.175011   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:37.258573   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.00398243s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.14s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-232000 exec deployment/netcat -- nslookup kubernetes.default
E1204 16:53:37.420494   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-232000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (82.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.2
E1204 16:53:56.942193   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:53:57.593211   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:01.884080   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:18.074546   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:20.799297   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:29.397367   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:31.157744   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.2: (1m22.556316361s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (82.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-015000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a9927267-9251-4d9e-879b-e022e1dec029] Pending
helpers_test.go:344: "busybox" [a9927267-9251-4d9e-879b-e022e1dec029] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a9927267-9251-4d9e-879b-e022e1dec029] Running
E1204 16:54:53.790156   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:53.797311   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:53.810291   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:53.831612   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:53.873020   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:53.954680   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:54.116435   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:54.439416   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:55.080960   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:56.362417   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004835623s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-015000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.32s)
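
testdata/busybox.yaml itself is not reproduced in this log. A stand-in with the shape the assertions imply (a Pod named busybox, labelled integration-test=busybox, running the image the VerifyKubernetesImages steps later report) would look roughly like the sketch below; the repository's actual manifest may differ. The follow-up "ulimit -n" exec just confirms the container is reachable and reports a sane file-descriptor limit.

kubectl --context old-k8s-version-015000 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF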

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-015000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-015000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)
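
The --registries=MetricsServer=fake.domain override deliberately points the addon at an unreachable registry; the test only asserts that the Deployment object appears with the overridden image, not that it ever becomes Ready. One way to see the override land, assuming the addon's usual deployment name:

kubectl --context old-k8s-version-015000 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# expected to print something like: fake.domain/registry.k8s.io/echoserver:1.4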

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (8.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-015000 --alsologtostderr -v=3
E1204 16:54:58.925286   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:54:59.036958   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:04.047129   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-015000 --alsologtostderr -v=3: (8.427162516s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-015000 -n old-k8s-version-015000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-015000 -n old-k8s-version-015000: exit status 7 (84.485068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-015000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.36s)
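
The "(may be ok)" note reflects how minikube encodes status: per recent releases' `minikube status --help`, the exit code sets one bit per unhealthy component (1 for minikube, 2 for the cluster, 4 for Kubernetes), so exit status 7 on a deliberately stopped cluster is exactly what the test wants before re-enabling an addon. A quick check by hand:

out/minikube-darwin-amd64 status -p old-k8s-version-015000 --format='{{.Host}}'
echo "status bits: $?"   # 7 = 1 (minikube) + 2 (cluster) + 4 (kubernetes), all NOK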

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (399.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-015000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E1204 16:55:09.205408   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:09.212796   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:09.224348   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:09.247404   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:09.289992   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:09.371972   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:09.534332   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:09.856210   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:10.498804   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:11.780256   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:14.288390   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:14.342661   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:15.060840   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-015000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m39.463684711s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-015000 -n old-k8s-version-015000
E1204 17:01:47.421917   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (399.66s)
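
Note the command line mixes --kvm-network and --kvm-qemu-uri into a --driver=hyperkit run; the test matrix passes one flag set across drivers, and the kvm2-specific flags appear to be inert here. Under that assumption, the effective invocation reduces to roughly:

out/minikube-darwin-amd64 start -p old-k8s-version-015000 --memory=2200 \
  --alsologtostderr --wait=true --disable-driver-mounts --keep-context=false \
  --driver=hyperkit --kubernetes-version=v1.20.0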

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.22s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-836000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [08ea7a85-82b1-4f75-962d-52d1f95d56d4] Pending
helpers_test.go:344: "busybox" [08ea7a85-82b1-4f75-962d-52d1f95d56d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1204 16:55:19.464212   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [08ea7a85-82b1-4f75-962d-52d1f95d56d4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00535784s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-836000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-836000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-836000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (8.48s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-836000 --alsologtostderr -v=3
E1204 16:55:29.705825   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:34.771768   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-836000 --alsologtostderr -v=3: (8.476106521s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (83.420807ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-836000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (291.40s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.2
E1204 16:55:42.769715   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:50.186936   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:51.320210   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:55:53.860318   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:15.733399   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:20.959020   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:29.970751   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:29.977854   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:29.989340   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:30.010541   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:30.052090   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:30.133917   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:30.295942   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:30.619494   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:31.148526   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:31.260977   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:32.542641   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:35.105183   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:36.570836   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:36.929058   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:40.227530   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:47.288965   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:56:50.469890   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:57:04.765633   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:57:11.077364   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:57:15.123948   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:57:37.781481   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:57:52.040220   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:57:53.197935   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:07.582131   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:25.524033   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:25.531192   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:25.543055   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:25.564725   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:25.608066   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:25.689807   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:25.851844   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:26.174349   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:26.816789   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:28.099324   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:30.661028   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:35.290310   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:35.782881   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:37.218215   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:45.090342   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:58:46.024712   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:59:02.010943   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:59:04.929737   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:59:06.507029   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:59:13.966177   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:59:47.469924   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 16:59:53.920632   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:00:09.334893   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:00:15.190487   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:00:21.627962   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.2: (4m51.215972542s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (291.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rs2v8" [ba5c8611-ad32-4002-8e69-03cf9320b6c0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002889759s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
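
The harness polls for pods matching the label until they report healthy. Outside the suite, roughly the same wait can be expressed with kubectl directly (context, namespace, and label taken from the log above):

kubectl --context no-preload-836000 -n kubernetes-dashboard wait pod \
  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m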

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rs2v8" [ba5c8611-ad32-4002-8e69-03cf9320b6c0] Running
E1204 17:00:37.044090   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003908323s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-836000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-836000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.18s)
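
This step lists the images loaded on the node and flags anything outside minikube's expected set; the busybox image is the one deployed by DeployApp above, so the finding is expected rather than an error. To inspect the same list by hand (minikube accepts short, table, json, or yaml here):

out/minikube-darwin-amd64 -p no-preload-836000 image list --format=table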

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.06s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-836000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-836000 -n no-preload-836000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-836000 -n no-preload-836000: exit status 2 (185.144268ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-836000 -n no-preload-836000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-836000 -n no-preload-836000: exit status 2 (177.803858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-836000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-836000 -n no-preload-836000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-836000 -n no-preload-836000
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.06s)
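
The two exit-status-2 probes in the middle are the assertion, not a failure: after `pause`, the apiserver reports Paused and the kubelet Stopped, and `status` encodes that in its exit code (see the bitmask note earlier in this report). A bare-hands round trip:

out/minikube-darwin-amd64 pause -p no-preload-836000
out/minikube-darwin-amd64 status -p no-preload-836000 --format='{{.APIServer}}/{{.Kubelet}}' || true
out/minikube-darwin-amd64 unpause -p no-preload-836000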

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (48.31s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-865000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.2
E1204 17:00:53.992796   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:01:09.394982   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:01:19.798096   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:01:30.103777   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-865000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.2: (48.305938177s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.20s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-865000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [88690983-ec83-4090-acb4-bc04efefee53] Pending
helpers_test.go:344: "busybox" [88690983-ec83-4090-acb4-bc04efefee53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1204 17:01:36.704654   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:01:37.061637   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [88690983-ec83-4090-acb4-bc04efefee53] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003673132s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-865000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-865000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-865000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (8.45s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-865000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-865000 --alsologtostderr -v=3: (8.450576399s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bzkbh" [4b5a4850-d7bc-45ff-b4d2-5d0abf73e18f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002765501s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-865000 -n embed-certs-865000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-865000 -n embed-certs-865000: exit status 7 (83.888377ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-865000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (290.91s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-865000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-865000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.2: (4m50.716412806s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-865000 -n embed-certs-865000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (290.91s)
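
--embed-certs inlines the client certificate and key into the kubeconfig entry as base64 data instead of referencing files under .minikube/profiles/, so this profile should not contribute to the "client.crt: no such file" churn seen elsewhere in the report. To confirm the data is embedded (user entry name assumed to match the profile):

kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-865000")].user.client-certificate-data}' | head -c 40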

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bzkbh" [4b5a4850-d7bc-45ff-b4d2-5d0abf73e18f] Running
E1204 17:01:57.812798   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002882948s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-015000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-015000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-015000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-015000 -n old-k8s-version-015000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-015000 -n old-k8s-version-015000: exit status 2 (187.480605ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-015000 -n old-k8s-version-015000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-015000 -n old-k8s-version-015000: exit status 2 (185.826396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-015000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-015000 -n old-k8s-version-015000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-015000 -n old-k8s-version-015000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-988000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-988000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.2: (52.942414772s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.94s)
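
The default-k8s-diff-port group runs the same flow as the default group but with --apiserver-port=8444 instead of the stock 8443, so the profile's kubeconfig entry should point at :8444. A quick check (cluster entry name assumed to match the profile):

kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-988000")].cluster.server}'
# expect: https://<vm-ip>:8444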

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-988000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [66a55c5c-b9d5-4b26-afde-ac9241955003] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [66a55c5c-b9d5-4b26-afde-ac9241955003] Running
E1204 17:03:07.593091   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004525569s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-988000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.80s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-988000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-988000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-988000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-988000 --alsologtostderr -v=3: (8.460496027s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-988000 -n default-k8s-diff-port-988000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-988000 -n default-k8s-diff-port-988000: exit status 7 (83.243253ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-988000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (294.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-988000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.2
E1204 17:03:25.534479   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:03:37.226853   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kindnet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:03:53.242731   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:02.020081   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/skaffold-692000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:50.089259   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:50.096489   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:50.109681   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:50.131716   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:50.174084   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:50.257639   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:50.420247   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:50.743275   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:51.386203   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:52.668324   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:53.928823   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/kubenet-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:04:55.230323   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:00.352906   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:09.345798   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/custom-flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:10.595860   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:15.201542   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:18.600156   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:18.606623   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:18.619583   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:18.642492   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:18.684450   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:18.766732   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:18.928799   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:19.250386   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:19.892037   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:21.173876   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:23.735933   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:28.857595   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:31.079927   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:39.100402   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:54.000367   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/functional-084000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:05:59.584281   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:06:12.044551   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:06:30.112353   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/calico-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:06:36.713881   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/addons-778000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:06:37.069917   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:06:38.275125   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/auto-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:06:40.546940   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-988000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.2: (4m54.432472832s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-988000 -n default-k8s-diff-port-988000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (294.62s)
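Note on the repeated cert_rotation errors above: client-go keeps watching the client certificate of every context still present in the kubeconfig, so once earlier tests delete a profile (false-232000, old-k8s-version-015000, no-preload-836000, ...) its certificate path keeps failing to open until the run ends. One way to list such stale references by hand (a sketch, assuming the KUBECONFIG from this run's header):

	# Print every client-certificate path the kubeconfig references,
	# then flag the ones that no longer exist on disk.
	kubectl config view -o jsonpath='{range .users[*]}{.user.client-certificate}{"\n"}{end}' |
	while read -r crt; do
		[ -n "$crt" ] && [ ! -e "$crt" ] && echo "stale cert reference: $crt"
	done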

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-795d4" [3264a921-bb45-4c2b-bf25-58898e51ea84] Running
E1204 17:06:47.431942   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003660146s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-795d4" [3264a921-bb45-4c2b-bf25-58898e51ea84] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003896856s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-865000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-865000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-865000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-865000 -n embed-certs-865000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-865000 -n embed-certs-865000: exit status 2 (180.710514ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-865000 -n embed-certs-865000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-865000 -n embed-certs-865000: exit status 2 (186.225847ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-865000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-865000 -n embed-certs-865000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-865000 -n embed-certs-865000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.07s)
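For reference, the pause check above relies on minikube status deliberately exiting non-zero while components are paused, which is why the harness accepts exit status 2. Reproduced by hand against the profile from this run (a sketch):

	minikube pause -p embed-certs-865000
	minikube status -p embed-certs-865000 --format='{{.APIServer}}'   # prints Paused, exit status 2
	minikube status -p embed-certs-865000 --format='{{.Kubelet}}'     # prints Stopped, exit status 2
	minikube unpause -p embed-certs-865000
	minikube status -p embed-certs-865000                             # healthy again, exit status 0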

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-966000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.2
E1204 17:07:33.968581   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/old-k8s-version-015000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-966000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.2: (41.851074378s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.85s)
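The newest-cni start above bundles the flags this group exists to exercise: --network-plugin=cni, a kubeadm override passed through --extra-config, a feature gate, and a reduced --wait set so startup does not block on pods that cannot schedule before a CNI is installed. Trimmed to its essentials, the same invocation is (a sketch):

	minikube start -p newest-cni-966000 \
		--driver=hyperkit \
		--memory=2200 \
		--network-plugin=cni \
		--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
		--feature-gates ServerSideApply=true \
		--wait=apiserver,system_pods,default_sa \
		--kubernetes-version=v1.31.2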

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-966000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-966000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.041484656s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)
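The metrics-server step above shows how the suite redirects an addon to stand-in images: --images and --registries take Component=value pairs, so no real registry is contacted. The same override by hand (a sketch, using the values from this run):

	minikube addons enable metrics-server -p newest-cni-966000 \
		--images=MetricsServer=registry.k8s.io/echoserver:1.4 \
		--registries=MetricsServer=fake.domain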

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-966000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-966000 --alsologtostderr -v=3: (8.432242191s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-966000 -n newest-cni-966000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-966000 -n newest-cni-966000: exit status 7 (87.869899ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-966000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)
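As the "(may be ok)" note indicates, minikube status encodes machine state in its exit code: 0 for a running cluster, while the stopped host here exits 7 yet still prints a usable state string. A tolerant check along those lines (a sketch):

	minikube status -p newest-cni-966000 --format='{{.Host}}'
	rc=$?
	# Non-zero codes (2 and 7 in this log) still come with valid output,
	# so report them instead of failing outright.
	[ "$rc" -ne 0 ] && echo "status exit code: $rc (may be ok)"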

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (30.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-966000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.2
E1204 17:08:00.148145   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/flannel-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:08:02.470648   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/no-preload-836000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:08:07.600831   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/bridge-232000/client.crt: no such file or directory" logger="UnhandledError"
E1204 17:08:10.506174   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/enable-default-cni-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-966000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.2: (30.036947315s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-966000 -n newest-cni-966000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-twzhb" [3e08cd38-0896-44b4-bfc1-66d56db03614] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003086913s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-twzhb" [3e08cd38-0896-44b4-bfc1-66d56db03614] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003590248s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-988000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-988000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)
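The image audit above lists everything in the node's container runtime as JSON and scans for images outside the expected minikube set; the busybox hit is the workload deployed earlier in the serial group. Filtering the same output by hand might look like this (a sketch, assuming jq is available and each entry carries a repoTags array):

	minikube -p default-k8s-diff-port-988000 image list --format=json |
		jq -r '.[].repoTags[]' |
		grep -v -e '^registry.k8s.io/' -e 'storage-provisioner'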

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-988000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-988000 -n default-k8s-diff-port-988000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-988000 -n default-k8s-diff-port-988000: exit status 2 (213.592414ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-988000 -n default-k8s-diff-port-988000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-988000 -n default-k8s-diff-port-988000: exit status 2 (224.125522ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-988000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-988000 -n default-k8s-diff-port-988000
E1204 17:08:25.542010   17821 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20045-17258/.minikube/profiles/false-232000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-988000 -n default-k8s-diff-port-988000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-966000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-966000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-966000 -n newest-cni-966000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-966000 -n newest-cni-966000: exit status 2 (186.090959ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-966000 -n newest-cni-966000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-966000 -n newest-cni-966000: exit status 2 (192.524503ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-966000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-966000 -n newest-cni-966000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-966000 -n newest-cni-966000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.05s)

                                                
                                    

Test skip (22/324)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (13.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port257767168/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (176.521355ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1204 15:26:49.099234   17821 retry.go:31] will retry after 729.445846ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (147.42699ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1204 15:26:49.976407   17821 retry.go:31] will retry after 466.640282ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (144.076244ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1204 15:26:50.587469   17821 retry.go:31] will retry after 940.310791ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (149.251101ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1204 15:26:51.677619   17821 retry.go:31] will retry after 2.238745947s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (145.840508ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1204 15:26:54.063118   17821 retry.go:31] will retry after 3.571661201s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (147.072827ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1204 15:26:57.782470   17821 retry.go:31] will retry after 4.401116647s: exit status 1
2024/12/04 15:26:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (174.842456ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-084000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-084000 ssh "sudo umount -f /mount-9p": exit status 1 (155.333022ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-084000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-084000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port257767168/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.72s)
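This skip is environmental rather than a product failure: minikube mount runs a 9p file server on the host, and macOS prompts before letting a non-code-signed binary listen on a non-localhost port, which an unattended CI agent can never approve. Run interactively, the same check is straightforward (a sketch, using the port from this run):

	minikube mount -p functional-084000 /tmp/src:/mount-9p --port 46464 &
	minikube -p functional-084000 ssh "findmnt -T /mount-9p | grep 9p"   # succeeds once the prompt is approved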

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-232000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-232000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-232000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-232000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-232000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-232000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-232000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-232000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-232000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-232000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-232000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: /etc/hosts:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: /etc/resolv.conf:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-232000

>>> host: crictl pods:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: crictl containers:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> k8s: describe netcat deployment:
error: context "cilium-232000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-232000" does not exist

>>> k8s: netcat logs:
error: context "cilium-232000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-232000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-232000" does not exist

>>> k8s: coredns logs:
error: context "cilium-232000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-232000" does not exist

>>> k8s: api server logs:
error: context "cilium-232000" does not exist

>>> host: /etc/cni:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: ip a s:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: ip r s:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: iptables-save:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: iptables table nat:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-232000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-232000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-232000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-232000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-232000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-232000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-232000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-232000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-232000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-232000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-232000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: kubelet daemon config:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> k8s: kubelet logs:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-232000

>>> host: docker daemon status:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: docker daemon config:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: docker system info:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: cri-docker daemon status:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: cri-docker daemon config:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-232000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-232000"

                                                
                                                
----------------------- debugLogs end: cilium-232000 [took: 6.096648879s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-232000
--- SKIP: TestNetworkPlugins/group/cilium (6.34s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-811000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-811000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)